2023-07-16 14:15:13,142 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6 2023-07-16 14:15:13,159 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-16 14:15:13,178 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 14:15:13,178 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75, deleteOnExit=true 2023-07-16 14:15:13,178 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 14:15:13,179 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/test.cache.data in system properties and HBase conf 2023-07-16 14:15:13,179 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 14:15:13,180 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir in system properties and HBase conf 2023-07-16 14:15:13,180 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 14:15:13,180 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 14:15:13,180 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 14:15:13,331 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-16 14:15:13,790 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 14:15:13,794 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:13,795 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:13,795 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 14:15:13,795 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:13,796 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 14:15:13,796 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 14:15:13,796 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:13,797 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:13,797 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 14:15:13,797 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/nfs.dump.dir in system properties and HBase conf 2023-07-16 14:15:13,798 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/java.io.tmpdir in system properties and HBase conf 2023-07-16 14:15:13,798 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:13,798 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 14:15:13,798 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 14:15:14,315 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:14,320 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:14,645 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-16 14:15:14,853 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-16 14:15:14,868 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:14,910 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:14,946 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/java.io.tmpdir/Jetty_localhost_39429_hdfs____tdftvj/webapp 2023-07-16 14:15:15,091 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39429 2023-07-16 14:15:15,101 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:15,101 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:15,668 WARN [Listener at localhost/42609] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:15,810 WARN [Listener at localhost/42609] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:15,915 WARN [Listener at localhost/42609] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:15,923 INFO [Listener at localhost/42609] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:15,931 INFO [Listener at localhost/42609] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/java.io.tmpdir/Jetty_localhost_40585_datanode____.odozs3/webapp 2023-07-16 14:15:16,053 INFO [Listener at localhost/42609] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40585 2023-07-16 14:15:16,520 WARN [Listener at localhost/34151] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:16,560 WARN [Listener at localhost/34151] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:16,564 WARN [Listener at localhost/34151] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:16,566 INFO [Listener at localhost/34151] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:16,572 INFO [Listener at localhost/34151] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/java.io.tmpdir/Jetty_localhost_40947_datanode____s21fmv/webapp 2023-07-16 14:15:16,674 INFO [Listener at localhost/34151] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40947 2023-07-16 14:15:16,694 WARN [Listener at localhost/40331] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:16,720 WARN [Listener at localhost/40331] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:16,723 WARN [Listener at localhost/40331] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:16,725 INFO [Listener at localhost/40331] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:16,731 INFO [Listener at localhost/40331] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/java.io.tmpdir/Jetty_localhost_41801_datanode____.dshu2l/webapp 2023-07-16 14:15:16,862 INFO [Listener at localhost/40331] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41801 2023-07-16 14:15:16,874 WARN [Listener at localhost/36419] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:17,108 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x916c4aaf337bb4b3: Processing first storage report for DS-b2b4229c-71db-43a2-ad3d-ead4729d004b from datanode bad6bfd1-28e8-435d-86b8-123c65c90635 2023-07-16 14:15:17,110 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x916c4aaf337bb4b3: from storage DS-b2b4229c-71db-43a2-ad3d-ead4729d004b node DatanodeRegistration(127.0.0.1:42869, datanodeUuid=bad6bfd1-28e8-435d-86b8-123c65c90635, infoPort=39071, 
infoSecurePort=0, ipcPort=40331, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,111 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x953b722326a38b54: Processing first storage report for DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3 from datanode 809c40bf-7c98-4302-803d-1582ef65a464 2023-07-16 14:15:17,111 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x953b722326a38b54: from storage DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3 node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=809c40bf-7c98-4302-803d-1582ef65a464, infoPort=44025, infoSecurePort=0, ipcPort=34151, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,111 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x506d2e0f5ce428db: Processing first storage report for DS-295e8c93-396b-49b2-b552-da44a87ff94f from datanode 38781a8c-81fb-4e0c-8d51-66ba030550e2 2023-07-16 14:15:17,111 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x506d2e0f5ce428db: from storage DS-295e8c93-396b-49b2-b552-da44a87ff94f node DatanodeRegistration(127.0.0.1:34829, datanodeUuid=38781a8c-81fb-4e0c-8d51-66ba030550e2, infoPort=33075, infoSecurePort=0, ipcPort=36419, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,112 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x916c4aaf337bb4b3: Processing first storage report for DS-c74c58e1-2665-47ba-b1d8-98303e6d229d from datanode bad6bfd1-28e8-435d-86b8-123c65c90635 2023-07-16 14:15:17,112 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x916c4aaf337bb4b3: from storage DS-c74c58e1-2665-47ba-b1d8-98303e6d229d node DatanodeRegistration(127.0.0.1:42869, datanodeUuid=bad6bfd1-28e8-435d-86b8-123c65c90635, infoPort=39071, infoSecurePort=0, ipcPort=40331, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,112 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x953b722326a38b54: Processing first storage report for DS-adf67777-9ca3-4fac-949c-10a6ee492d77 from datanode 809c40bf-7c98-4302-803d-1582ef65a464 2023-07-16 14:15:17,112 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x953b722326a38b54: from storage DS-adf67777-9ca3-4fac-949c-10a6ee492d77 node DatanodeRegistration(127.0.0.1:40055, datanodeUuid=809c40bf-7c98-4302-803d-1582ef65a464, infoPort=44025, infoSecurePort=0, ipcPort=34151, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,112 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x506d2e0f5ce428db: Processing first storage report for DS-0e7256ec-2b7f-4cad-a11f-69dc103796a0 from datanode 38781a8c-81fb-4e0c-8d51-66ba030550e2 2023-07-16 14:15:17,113 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x506d2e0f5ce428db: from storage 
DS-0e7256ec-2b7f-4cad-a11f-69dc103796a0 node DatanodeRegistration(127.0.0.1:34829, datanodeUuid=38781a8c-81fb-4e0c-8d51-66ba030550e2, infoPort=33075, infoSecurePort=0, ipcPort=36419, storageInfo=lv=-57;cid=testClusterID;nsid=1658842142;c=1689516914396), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 14:15:17,316 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6 2023-07-16 14:15:17,398 INFO [Listener at localhost/36419] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/zookeeper_0, clientPort=63627, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 14:15:17,420 INFO [Listener at localhost/36419] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63627 2023-07-16 14:15:17,428 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:17,431 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:18,155 INFO [Listener at localhost/36419] util.FSUtils(471): Created version file at hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 with version=8 2023-07-16 14:15:18,155 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/hbase-staging 2023-07-16 14:15:18,169 DEBUG [Listener at localhost/36419] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 14:15:18,169 DEBUG [Listener at localhost/36419] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 14:15:18,169 DEBUG [Listener at localhost/36419] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 14:15:18,169 DEBUG [Listener at localhost/36419] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
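[Editor's note] The entries above show the mini-cluster bootstrap that the log reports at 14:15:13: three DataNodes, one ZooKeeper server on clientPort=63627, and the HBase root created at hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1. As a point of reference only, the following is a minimal, hypothetical sketch of how a test typically requests that topology through HBaseTestingUtility and StartMiniClusterOption (the option string printed in the log). The class and field names here (MiniClusterSketch, TEST_UTIL) are illustrative, not taken from TestRSGroupsAdmin1.

    // Hedged sketch: requests the topology logged above (1 master, 3 region servers,
    // 3 DataNodes, 1 ZooKeeper server). Not the actual TestRSGroupsAdmin1 setup.
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      public static void main(String[] args) throws Exception {
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)          // matches numMasters=1 in the log
            .numRegionServers(3)    // matches numRegionServers=3
            .numDataNodes(3)        // matches numDataNodes=3
            .numZkServers(1)        // matches numZkServers=1
            .build();
        TEST_UTIL.startMiniCluster(option);   // starts DFS, ZooKeeper, master and region servers
        try {
          // test body would run against TEST_UTIL here
        } finally {
          TEST_UTIL.shutdownMiniCluster();
        }
      }
    }
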
2023-07-16 14:15:18,613 INFO [Listener at localhost/36419] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-16 14:15:19,364 INFO [Listener at localhost/36419] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:19,413 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:19,414 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:19,414 INFO [Listener at localhost/36419] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:19,414 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:19,414 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:19,575 INFO [Listener at localhost/36419] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:19,674 DEBUG [Listener at localhost/36419] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-16 14:15:19,778 INFO [Listener at localhost/36419] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41971 2023-07-16 14:15:19,793 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:19,795 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:19,824 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=master:41971 connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:19,887 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:419710x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:19,895 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:41971-0x1016e7cc5860000 connected 2023-07-16 14:15:19,931 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:19,932 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:19,937 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:19,954 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41971 2023-07-16 14:15:19,957 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41971 2023-07-16 14:15:19,958 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41971 2023-07-16 14:15:19,959 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41971 2023-07-16 14:15:19,959 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41971 2023-07-16 14:15:20,005 INFO [Listener at localhost/36419] log.Log(170): Logging initialized @7594ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-16 14:15:20,175 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:20,176 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:20,177 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:20,179 INFO [Listener at localhost/36419] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 14:15:20,179 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:20,179 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:20,184 INFO [Listener at localhost/36419] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
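[Editor's note] From 14:15:19 onward the log shows the master's RPC executors, NettyRpcServer bind, ZooKeeper session and HTTP filters coming up. Because this run belongs to the hbase-rsgroup module, the cluster configuration conventionally enables the RSGroup coprocessor endpoint and the group-aware balancer before startup. The sketch below is an assumption about that conventional wiring, not something visible in this log, and the helper name (rsGroupEnabledConf) is hypothetical.

    // Hedged sketch: conventional way rsgroup tests on branch-2.4 enable the feature in
    // the configuration handed to the mini-cluster. TestRSGroupsAdmin1's exact setup may differ.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint;
    import org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer;

    public class RSGroupTestConfSketch {
      // Returns a configuration with the RSGroup master coprocessor and balancer registered.
      public static Configuration rsGroupEnabledConf() {
        Configuration conf = HBaseConfiguration.create();
        // Register the master coprocessor that serves the RSGroupAdmin RPCs.
        conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            RSGroupAdminEndpoint.class.getName());
        // Use the group-aware balancer so regions stay within their server group.
        conf.set(HConstants.HBASE_MASTER_LOADBALANCER_CLASS,
            RSGroupBasedLoadBalancer.class.getName());
        return conf;
      }
    }
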
2023-07-16 14:15:20,251 INFO [Listener at localhost/36419] http.HttpServer(1146): Jetty bound to port 44773 2023-07-16 14:15:20,253 INFO [Listener at localhost/36419] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:20,289 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,292 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@247d4b40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:20,293 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,293 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e144596{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:20,386 INFO [Listener at localhost/36419] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:20,402 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:20,402 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:20,404 INFO [Listener at localhost/36419] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:20,412 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,442 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2494f2d0{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:20,458 INFO [Listener at localhost/36419] server.AbstractConnector(333): Started ServerConnector@4811724e{HTTP/1.1, (http/1.1)}{0.0.0.0:44773} 2023-07-16 14:15:20,458 INFO [Listener at localhost/36419] server.Server(415): Started @8048ms 2023-07-16 14:15:20,464 INFO [Listener at localhost/36419] master.HMaster(444): hbase.rootdir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1, hbase.cluster.distributed=false 2023-07-16 14:15:20,563 INFO [Listener at localhost/36419] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:20,563 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,564 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,564 INFO [Listener at localhost/36419] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
14:15:20,564 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,564 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:20,574 INFO [Listener at localhost/36419] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:20,577 INFO [Listener at localhost/36419] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43741 2023-07-16 14:15:20,581 INFO [Listener at localhost/36419] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:20,594 DEBUG [Listener at localhost/36419] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:20,595 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,598 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,601 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43741 connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:20,612 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:437410x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:20,614 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:437410x0, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:20,614 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43741-0x1016e7cc5860001 connected 2023-07-16 14:15:20,616 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:20,617 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:20,619 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43741 2023-07-16 14:15:20,622 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43741 2023-07-16 14:15:20,626 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43741 2023-07-16 14:15:20,628 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43741 2023-07-16 14:15:20,628 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43741 2023-07-16 14:15:20,631 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:20,632 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:20,632 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:20,634 INFO [Listener at localhost/36419] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:20,634 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:20,634 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:20,634 INFO [Listener at localhost/36419] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:20,637 INFO [Listener at localhost/36419] http.HttpServer(1146): Jetty bound to port 42355 2023-07-16 14:15:20,637 INFO [Listener at localhost/36419] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:20,650 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,651 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3b1f63f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:20,652 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,652 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3063b687{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:20,669 INFO [Listener at localhost/36419] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:20,671 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:20,671 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:20,671 INFO [Listener at localhost/36419] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:20,675 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,680 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5c09b86c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:20,682 INFO [Listener at localhost/36419] server.AbstractConnector(333): Started ServerConnector@600533b4{HTTP/1.1, (http/1.1)}{0.0.0.0:42355} 2023-07-16 14:15:20,682 INFO [Listener at localhost/36419] server.Server(415): Started @8272ms 2023-07-16 14:15:20,701 INFO [Listener at localhost/36419] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:20,702 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,702 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,702 INFO [Listener at localhost/36419] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:20,703 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,703 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:20,703 INFO [Listener at localhost/36419] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:20,705 INFO [Listener at localhost/36419] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34921 2023-07-16 14:15:20,706 INFO [Listener at localhost/36419] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:20,707 DEBUG [Listener at localhost/36419] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:20,708 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,711 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,712 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34921 connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:20,716 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:349210x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:20,717 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34921-0x1016e7cc5860002 connected 2023-07-16 14:15:20,718 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): 
regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:20,718 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:20,719 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:20,720 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34921 2023-07-16 14:15:20,722 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34921 2023-07-16 14:15:20,726 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34921 2023-07-16 14:15:20,727 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34921 2023-07-16 14:15:20,727 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34921 2023-07-16 14:15:20,730 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:20,730 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:20,730 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:20,730 INFO [Listener at localhost/36419] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:20,731 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:20,731 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:20,731 INFO [Listener at localhost/36419] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:20,731 INFO [Listener at localhost/36419] http.HttpServer(1146): Jetty bound to port 39807 2023-07-16 14:15:20,732 INFO [Listener at localhost/36419] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:20,737 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,737 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@13dfbe27{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:20,738 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,738 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33aee7d7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:20,750 INFO [Listener at localhost/36419] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:20,751 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:20,751 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:20,751 INFO [Listener at localhost/36419] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:20,752 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,753 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@21d37555{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:20,754 INFO [Listener at localhost/36419] server.AbstractConnector(333): Started ServerConnector@74f1938f{HTTP/1.1, (http/1.1)}{0.0.0.0:39807} 2023-07-16 14:15:20,754 INFO [Listener at localhost/36419] server.Server(415): Started @8344ms 2023-07-16 14:15:20,767 INFO [Listener at localhost/36419] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:20,767 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,767 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,767 INFO [Listener at localhost/36419] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:20,767 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 14:15:20,768 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:20,768 INFO [Listener at localhost/36419] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:20,769 INFO [Listener at localhost/36419] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41933 2023-07-16 14:15:20,770 INFO [Listener at localhost/36419] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:20,771 DEBUG [Listener at localhost/36419] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:20,772 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,773 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:20,775 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41933 connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:20,782 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:419330x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:20,784 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:419330x0, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:20,785 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:419330x0, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:20,786 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:419330x0, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:20,787 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41933-0x1016e7cc5860003 connected 2023-07-16 14:15:20,788 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41933 2023-07-16 14:15:20,789 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41933 2023-07-16 14:15:20,789 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41933 2023-07-16 14:15:20,790 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41933 2023-07-16 14:15:20,791 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41933 2023-07-16 14:15:20,794 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 
2023-07-16 14:15:20,794 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:20,795 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:20,795 INFO [Listener at localhost/36419] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:20,796 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:20,796 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:20,796 INFO [Listener at localhost/36419] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:20,797 INFO [Listener at localhost/36419] http.HttpServer(1146): Jetty bound to port 37263 2023-07-16 14:15:20,797 INFO [Listener at localhost/36419] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:20,806 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,806 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ad75e86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:20,806 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,807 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@ac4d373{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:20,815 INFO [Listener at localhost/36419] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:20,816 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:20,816 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:20,816 INFO [Listener at localhost/36419] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 14:15:20,819 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:20,821 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@188ba91{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:20,822 INFO [Listener at 
localhost/36419] server.AbstractConnector(333): Started ServerConnector@532a3d3{HTTP/1.1, (http/1.1)}{0.0.0.0:37263} 2023-07-16 14:15:20,823 INFO [Listener at localhost/36419] server.Server(415): Started @8412ms 2023-07-16 14:15:20,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:20,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@54097cdd{HTTP/1.1, (http/1.1)}{0.0.0.0:32983} 2023-07-16 14:15:20,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8428ms 2023-07-16 14:15:20,838 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:20,850 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:20,852 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:20,873 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:20,873 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:20,873 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:20,873 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:20,875 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:20,875 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:20,877 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,41971,1689516918385 from backup master directory 2023-07-16 14:15:20,877 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:20,882 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:20,882 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:20,883 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:20,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:20,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-16 14:15:20,888 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-16 14:15:20,998 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/hbase.id with ID: ca8f0aed-5757-4c7c-b261-87885dcb06d4 2023-07-16 14:15:21,053 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:21,108 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:21,172 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x041f5a35 to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:21,205 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4425f712, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:21,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:21,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 14:15:21,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-16 14:15:21,261 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-16 
14:15:21,263 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 14:15:21,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-16 14:15:21,270 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 
2023-07-16 14:15:21,312 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store-tmp 2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:21,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:21,362 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 14:15:21,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:21,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/WALs/jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:21,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41971%2C1689516918385, suffix=, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/WALs/jenkins-hbase4.apache.org,41971,1689516918385, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/oldWALs, maxLogs=10 2023-07-16 14:15:21,462 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:21,462 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:21,462 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:21,472 DEBUG [RS-EventLoopGroup-5-3] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-16 14:15:21,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/WALs/jenkins-hbase4.apache.org,41971,1689516918385/jenkins-hbase4.apache.org%2C41971%2C1689516918385.1689516921401 2023-07-16 14:15:21,562 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK], DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK]] 2023-07-16 14:15:21,563 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:21,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:21,569 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,683 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,712 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 14:15:21,784 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 14:15:21,798 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-16 14:15:21,804 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,805 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:21,832 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:21,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10072086880, jitterRate=-0.06196381151676178}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:21,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:21,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 14:15:21,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 14:15:21,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 14:15:21,864 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 14:15:21,867 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-16 14:15:21,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 49 msec 2023-07-16 14:15:21,916 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 14:15:21,943 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 14:15:21,949 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-16 14:15:21,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 14:15:21,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 14:15:21,976 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 14:15:21,979 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:21,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 14:15:21,981 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 14:15:22,000 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 14:15:22,005 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:22,005 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:22,005 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:22,005 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:22,005 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:22,006 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,41971,1689516918385, sessionid=0x1016e7cc5860000, setting cluster-up flag (Was=false) 2023-07-16 14:15:22,033 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:22,040 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 14:15:22,041 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:22,048 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:22,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 14:15:22,056 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:22,059 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.hbase-snapshot/.tmp 2023-07-16 14:15:22,129 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(951): ClusterId : ca8f0aed-5757-4c7c-b261-87885dcb06d4 2023-07-16 14:15:22,130 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(951): ClusterId : ca8f0aed-5757-4c7c-b261-87885dcb06d4 2023-07-16 14:15:22,145 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(951): ClusterId : ca8f0aed-5757-4c7c-b261-87885dcb06d4 2023-07-16 14:15:22,146 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:22,148 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:22,148 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:22,157 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:22,157 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:22,158 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:22,158 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:22,158 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:22,158 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:22,162 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:22,162 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:22,165 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ReadOnlyZKClient(139): Connect 0x79b6f309 to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:22,168 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot 
initialized 2023-07-16 14:15:22,174 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ReadOnlyZKClient(139): Connect 0x3f90762c to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:22,174 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ReadOnlyZKClient(139): Connect 0x4e304859 to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:22,182 DEBUG [RS:0;jenkins-hbase4:43741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7771124f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:22,184 DEBUG [RS:0;jenkins-hbase4:43741] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@734e9879, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:22,186 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 14:15:22,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 14:15:22,212 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 14:15:22,213 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-16 14:15:22,222 DEBUG [RS:1;jenkins-hbase4:34921] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40f7afef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:22,222 DEBUG [RS:1;jenkins-hbase4:34921] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44c82329, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:22,222 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 14:15:22,227 DEBUG [RS:2;jenkins-hbase4:41933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31579fae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:22,227 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43741 2023-07-16 14:15:22,227 DEBUG [RS:2;jenkins-hbase4:41933] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@706a9595, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:22,233 INFO [RS:0;jenkins-hbase4:43741] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:22,233 INFO [RS:0;jenkins-hbase4:43741] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:22,233 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:22,237 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:43741, startcode=1689516920562 2023-07-16 14:15:22,245 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41933 2023-07-16 14:15:22,246 INFO [RS:2;jenkins-hbase4:41933] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:22,251 INFO [RS:2;jenkins-hbase4:41933] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:22,251 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:22,252 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34921 2023-07-16 14:15:22,252 INFO [RS:1;jenkins-hbase4:34921] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:22,252 INFO [RS:1;jenkins-hbase4:34921] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:22,252 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 14:15:22,254 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:41933, startcode=1689516920766 2023-07-16 14:15:22,259 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:34921, startcode=1689516920700 2023-07-16 14:15:22,292 DEBUG [RS:1;jenkins-hbase4:34921] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:22,294 DEBUG [RS:0;jenkins-hbase4:43741] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:22,292 DEBUG [RS:2;jenkins-hbase4:41933] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:22,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:22,385 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60119, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:22,386 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58775, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:22,385 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58297, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:22,405 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:22,423 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:22,426 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:22,452 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:22,454 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 14:15:22,454 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 14:15:22,455 WARN [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 14:15:22,454 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 14:15:22,455 WARN [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 14:15:22,455 WARN [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 14:15:22,459 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 14:15:22,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:22,460 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-16 14:15:22,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:22,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:22,462 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:22,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:22,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 14:15:22,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:22,463 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689516952482 2023-07-16 14:15:22,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 14:15:22,490 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:22,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 14:15:22,493 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 14:15:22,496 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:22,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 14:15:22,509 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 14:15:22,510 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 14:15:22,510 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 14:15:22,511 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 14:15:22,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 14:15:22,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 14:15:22,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 14:15:22,522 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 14:15:22,528 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516922526,5,FailOnTimeoutGroup] 2023-07-16 14:15:22,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516922529,5,FailOnTimeoutGroup] 2023-07-16 14:15:22,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 14:15:22,546 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,547 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:22,556 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:34921, startcode=1689516920700 2023-07-16 14:15:22,559 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:41933, startcode=1689516920766 2023-07-16 14:15:22,559 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:43741, startcode=1689516920562 2023-07-16 14:15:22,564 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,565 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:22,566 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 14:15:22,572 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:22,572 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 14:15:22,573 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 14:15:22,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 14:15:22,583 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 2023-07-16 14:15:22,583 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 2023-07-16 14:15:22,583 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 2023-07-16 14:15:22,583 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42609 2023-07-16 14:15:22,583 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42609 2023-07-16 14:15:22,583 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42609 2023-07-16 14:15:22,583 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44773 2023-07-16 14:15:22,583 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44773 2023-07-16 14:15:22,584 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44773 2023-07-16 14:15:22,601 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,601 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,602 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,602 WARN [RS:0;jenkins-hbase4:43741] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:22,602 WARN [RS:1;jenkins-hbase4:34921] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:22,604 INFO [RS:0;jenkins-hbase4:43741] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:22,602 WARN [RS:2;jenkins-hbase4:41933] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 14:15:22,605 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,605 INFO [RS:1;jenkins-hbase4:34921] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:22,607 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,607 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:22,605 INFO [RS:2;jenkins-hbase4:41933] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:22,608 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,621 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43741,1689516920562] 2023-07-16 14:15:22,621 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34921,1689516920700] 2023-07-16 14:15:22,622 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41933,1689516920766] 2023-07-16 14:15:22,651 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:22,652 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:22,653 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 2023-07-16 
14:15:22,655 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,656 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,655 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,657 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,658 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,658 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,659 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,662 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,663 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,675 DEBUG [RS:1;jenkins-hbase4:34921] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:22,681 DEBUG [RS:2;jenkins-hbase4:41933] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:22,681 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:22,696 INFO [RS:0;jenkins-hbase4:43741] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:22,697 INFO [RS:1;jenkins-hbase4:34921] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:22,696 INFO [RS:2;jenkins-hbase4:41933] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:22,751 INFO [RS:0;jenkins-hbase4:43741] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:22,756 INFO [RS:1;jenkins-hbase4:34921] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:22,756 INFO [RS:2;jenkins-hbase4:41933] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, 
globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:22,760 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:22,774 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:22,774 INFO [RS:2;jenkins-hbase4:41933] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:22,775 INFO [RS:1;jenkins-hbase4:34921] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:22,775 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,775 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,774 INFO [RS:0;jenkins-hbase4:43741] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:22,776 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:22,779 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:22,783 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info 2023-07-16 14:15:22,783 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:22,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:22,785 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:22,789 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:22,790 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:22,790 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:22,793 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:22,791 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:22,803 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 
14:15:22,814 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,814 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:22,815 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:22,815 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,816 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,815 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,817 INFO 
[RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,819 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:2;jenkins-hbase4:41933] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,818 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 DEBUG [RS:0;jenkins-hbase4:43741] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,819 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,820 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service 
name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,821 DEBUG [RS:1;jenkins-hbase4:34921] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:22,828 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:22,829 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:22,833 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 14:15:22,839 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,839 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,840 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,841 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:22,847 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,847 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,847 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,851 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,852 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,852 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:22,857 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:22,858 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10992979200, jitterRate=0.023800969123840332}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:22,859 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:22,859 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:22,859 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:22,859 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:22,859 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:22,859 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:22,861 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:22,861 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:22,870 INFO [RS:0;jenkins-hbase4:43741] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:22,872 INFO [RS:2;jenkins-hbase4:41933] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:22,874 INFO [RS:1;jenkins-hbase4:34921] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:22,874 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:22,875 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 14:15:22,886 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34921,1689516920700-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,886 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41933,1689516920766-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:22,886 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43741,1689516920562-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:22,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 14:15:22,920 INFO [RS:1;jenkins-hbase4:34921] regionserver.Replication(203): jenkins-hbase4.apache.org,34921,1689516920700 started 2023-07-16 14:15:22,920 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34921,1689516920700, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34921, sessionid=0x1016e7cc5860002 2023-07-16 14:15:22,920 INFO [RS:2;jenkins-hbase4:41933] regionserver.Replication(203): jenkins-hbase4.apache.org,41933,1689516920766 started 2023-07-16 14:15:22,920 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:22,920 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41933,1689516920766, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41933, sessionid=0x1016e7cc5860003 2023-07-16 14:15:22,920 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:22,920 DEBUG [RS:1;jenkins-hbase4:34921] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,921 DEBUG [RS:2;jenkins-hbase4:41933] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,922 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 14:15:22,922 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34921,1689516920700' 2023-07-16 14:15:22,924 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41933,1689516920766' 2023-07-16 14:15:22,926 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:22,926 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:22,930 INFO [RS:0;jenkins-hbase4:43741] regionserver.Replication(203): jenkins-hbase4.apache.org,43741,1689516920562 started 2023-07-16 14:15:22,931 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43741,1689516920562, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43741, sessionid=0x1016e7cc5860001 2023-07-16 14:15:22,931 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:22,931 DEBUG [RS:0;jenkins-hbase4:43741] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,931 DEBUG [RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43741,1689516920562' 2023-07-16 14:15:22,931 DEBUG 
[RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:22,931 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:22,931 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:22,932 DEBUG [RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:22,932 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:22,932 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:22,932 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:22,933 DEBUG [RS:1;jenkins-hbase4:34921] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:22,933 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34921,1689516920700' 2023-07-16 14:15:22,933 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:22,933 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 14:15:22,932 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:22,934 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:22,933 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:22,934 DEBUG [RS:2;jenkins-hbase4:41933] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:22,939 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41933,1689516920766' 2023-07-16 14:15:22,939 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:22,939 DEBUG [RS:2;jenkins-hbase4:41933] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:22,940 DEBUG [RS:1;jenkins-hbase4:34921] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:22,940 DEBUG [RS:2;jenkins-hbase4:41933] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:22,940 INFO [RS:2;jenkins-hbase4:41933] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:22,940 INFO [RS:2;jenkins-hbase4:41933] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, 
not starting space quota manager. 2023-07-16 14:15:22,940 DEBUG [RS:1;jenkins-hbase4:34921] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:22,934 DEBUG [RS:0;jenkins-hbase4:43741] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:22,942 DEBUG [RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43741,1689516920562' 2023-07-16 14:15:22,943 DEBUG [RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:22,940 INFO [RS:1;jenkins-hbase4:34921] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:22,943 INFO [RS:1;jenkins-hbase4:34921] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 14:15:22,943 DEBUG [RS:0;jenkins-hbase4:43741] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:22,944 DEBUG [RS:0;jenkins-hbase4:43741] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:22,944 INFO [RS:0;jenkins-hbase4:43741] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:22,944 INFO [RS:0;jenkins-hbase4:43741] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 14:15:23,063 INFO [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34921%2C1689516920700, suffix=, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,34921,1689516920700, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:23,067 INFO [RS:2;jenkins-hbase4:41933] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41933%2C1689516920766, suffix=, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,41933,1689516920766, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:23,069 INFO [RS:0;jenkins-hbase4:43741] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43741%2C1689516920562, suffix=, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,43741,1689516920562, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:23,086 DEBUG [jenkins-hbase4:41971] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 14:15:23,134 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:23,137 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:23,137 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:23,138 DEBUG [jenkins-hbase4:41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:23,140 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:23,143 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:23,143 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:23,144 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:23,144 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:23,145 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:23,145 DEBUG [jenkins-hbase4:41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:23,145 DEBUG [jenkins-hbase4:41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:23,145 DEBUG [jenkins-hbase4:41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:23,145 DEBUG [jenkins-hbase4:41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:23,161 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34921,1689516920700, state=OPENING 2023-07-16 14:15:23,184 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 14:15:23,185 INFO [RS:0;jenkins-hbase4:43741] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,43741,1689516920562/jenkins-hbase4.apache.org%2C43741%2C1689516920562.1689516923074 2023-07-16 14:15:23,185 INFO [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,34921,1689516920700/jenkins-hbase4.apache.org%2C34921%2C1689516920700.1689516923075 2023-07-16 14:15:23,186 INFO [RS:2;jenkins-hbase4:41933] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,41933,1689516920766/jenkins-hbase4.apache.org%2C41933%2C1689516920766.1689516923074 2023-07-16 14:15:23,186 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:23,187 DEBUG [RS:0;jenkins-hbase4:43741] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK], DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK]] 2023-07-16 14:15:23,187 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:23,192 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:23,199 DEBUG [RS:2;jenkins-hbase4:41933] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK]] 2023-07-16 14:15:23,199 DEBUG [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK], DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK]] 2023-07-16 14:15:23,405 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:23,408 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:23,413 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59548, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:23,425 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 14:15:23,426 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:23,432 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34921%2C1689516920700.meta, suffix=.meta, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,34921,1689516920700, 
archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:23,463 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:23,467 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:23,468 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:23,475 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,34921,1689516920700/jenkins-hbase4.apache.org%2C34921%2C1689516920700.meta.1689516923440.meta 2023-07-16 14:15:23,478 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK], DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK]] 2023-07-16 14:15:23,479 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:23,481 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:23,485 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 14:15:23,488 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 14:15:23,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 14:15:23,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:23,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 14:15:23,495 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 14:15:23,500 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:23,502 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info 2023-07-16 14:15:23,502 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info 2023-07-16 14:15:23,503 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:23,504 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:23,504 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:23,506 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:23,506 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:23,507 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:23,508 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:23,508 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:23,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table 2023-07-16 14:15:23,510 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table 2023-07-16 14:15:23,511 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:23,511 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:23,513 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:23,517 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:23,521 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 14:15:23,524 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:23,526 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9712428800, jitterRate=-0.09545958042144775}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:23,526 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:23,545 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689516923392 2023-07-16 14:15:23,565 WARN [ReadOnlyZKClient-127.0.0.1:63627@0x041f5a35] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 14:15:23,580 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 14:15:23,582 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 14:15:23,582 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34921,1689516920700, state=OPEN 2023-07-16 14:15:23,585 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:23,585 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:23,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 14:15:23,590 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34921,1689516920700 in 393 msec 2023-07-16 14:15:23,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 14:15:23,597 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 689 msec 2023-07-16 14:15:23,605 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:23,605 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.3660 sec 2023-07-16 14:15:23,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689516923606, completionTime=-1 2023-07-16 14:15:23,606 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 14:15:23,607 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining 
cluster... 2023-07-16 14:15:23,608 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59564, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:23,631 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:23,646 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 14:15:23,648 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 14:15:23,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 14:15:23,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689516983676 2023-07-16 14:15:23,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689517043676 2023-07-16 14:15:23,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 69 msec 2023-07-16 14:15:23,703 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41971,1689516918385-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:23,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41971,1689516918385-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:23,704 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41971,1689516918385-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:23,704 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:23,706 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:41971, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:23,706 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:23,708 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:23,717 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 14:15:23,724 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:23,727 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c empty. 2023-07-16 14:15:23,727 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:23,728 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 14:15:23,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-16 14:15:23,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:23,744 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 14:15:23,748 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:23,754 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:23,769 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:23,772 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 empty. 
2023-07-16 14:15:23,775 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:23,775 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 14:15:23,779 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:23,782 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 701c50185fdc12fe0464bfa3b96e779c, NAME => 'hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:23,819 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:23,821 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:23,822 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 701c50185fdc12fe0464bfa3b96e779c, disabling compactions & flushes 2023-07-16 14:15:23,822 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => bb99c7296a6419e19ffe990276a43f38, NAME => 'hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:23,822 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:23,822 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:23,822 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 
after waiting 0 ms 2023-07-16 14:15:23,822 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:23,822 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:23,822 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 701c50185fdc12fe0464bfa3b96e779c: 2023-07-16 14:15:23,830 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:23,856 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:23,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing bb99c7296a6419e19ffe990276a43f38, disabling compactions & flushes 2023-07-16 14:15:23,857 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:23,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:23,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. after waiting 0 ms 2023-07-16 14:15:23,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:23,857 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:23,857 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:23,862 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:23,863 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516923833"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516923833"}]},"ts":"1689516923833"} 2023-07-16 14:15:23,863 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516923863"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516923863"}]},"ts":"1689516923863"} 2023-07-16 14:15:23,904 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:23,907 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 14:15:23,909 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:23,911 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:23,917 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516923909"}]},"ts":"1689516923909"} 2023-07-16 14:15:23,917 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516923912"}]},"ts":"1689516923912"} 2023-07-16 14:15:23,923 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 14:15:23,926 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 14:15:23,929 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:23,930 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:23,930 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:23,930 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:23,930 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:23,932 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, ASSIGN}] 2023-07-16 14:15:23,937 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, ASSIGN 2023-07-16 14:15:23,938 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:23,938 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:23,938 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:23,938 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:23,938 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:23,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, ASSIGN}] 2023-07-16 14:15:23,939 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:23,941 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, ASSIGN 2023-07-16 14:15:23,943 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:23,944 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-16 14:15:23,946 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:23,947 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:23,947 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516923946"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516923946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516923946"}]},"ts":"1689516923946"} 2023-07-16 14:15:23,947 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516923947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516923947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516923947"}]},"ts":"1689516923947"} 2023-07-16 14:15:23,957 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:23,962 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:24,113 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,113 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:24,118 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49888, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:24,124 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 
2023-07-16 14:15:24,124 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 701c50185fdc12fe0464bfa3b96e779c, NAME => 'hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:24,124 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:24,124 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:24,125 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. service=MultiRowMutationService 2023-07-16 14:15:24,125 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bb99c7296a6419e19ffe990276a43f38, NAME => 'hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:24,126 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 14:15:24,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,126 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:24,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:24,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,127 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,131 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family m of region 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,131 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,134 DEBUG [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m 2023-07-16 14:15:24,134 DEBUG [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m 2023-07-16 14:15:24,134 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:24,134 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:24,134 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 701c50185fdc12fe0464bfa3b96e779c columnFamilyName m 2023-07-16 14:15:24,135 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bb99c7296a6419e19ffe990276a43f38 columnFamilyName info 2023-07-16 14:15:24,135 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] regionserver.HStore(310): Store=701c50185fdc12fe0464bfa3b96e779c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:24,136 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(310): 
Store=bb99c7296a6419e19ffe990276a43f38/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:24,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,138 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,139 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,143 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:24,144 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:24,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:24,148 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:24,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 701c50185fdc12fe0464bfa3b96e779c; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@23874028, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:24,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 701c50185fdc12fe0464bfa3b96e779c: 2023-07-16 14:15:24,149 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bb99c7296a6419e19ffe990276a43f38; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9916090560, jitterRate=-0.07649210095405579}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:24,149 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:24,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c., pid=9, masterSystemTime=1689516924117 2023-07-16 14:15:24,158 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38., pid=8, masterSystemTime=1689516924113 2023-07-16 14:15:24,163 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:24,163 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:24,164 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:24,165 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:24,165 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:24,165 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516924164"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516924164"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516924164"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516924164"}]},"ts":"1689516924164"} 2023-07-16 14:15:24,166 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,166 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516924166"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516924166"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516924166"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516924166"}]},"ts":"1689516924166"} 2023-07-16 14:15:24,173 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 14:15:24,174 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,34921,1689516920700 in 207 msec 2023-07-16 14:15:24,176 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 14:15:24,177 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,41933,1689516920766 in 214 msec 2023-07-16 14:15:24,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=4 
2023-07-16 14:15:24,179 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, ASSIGN in 236 msec 2023-07-16 14:15:24,181 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:24,181 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516924181"}]},"ts":"1689516924181"} 2023-07-16 14:15:24,182 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-16 14:15:24,182 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, ASSIGN in 245 msec 2023-07-16 14:15:24,184 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:24,184 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 14:15:24,184 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516924184"}]},"ts":"1689516924184"} 2023-07-16 14:15:24,187 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 14:15:24,195 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:24,197 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:24,207 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 456 msec 2023-07-16 14:15:24,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 565 msec 2023-07-16 14:15:24,248 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 14:15:24,250 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:24,250 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:24,277 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is 
online, refreshing cached information 2023-07-16 14:15:24,277 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 14:15:24,279 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:24,281 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49896, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:24,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 14:15:24,328 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:24,335 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 47 msec 2023-07-16 14:15:24,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 14:15:24,365 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:24,372 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 25 msec 2023-07-16 14:15:24,375 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:24,375 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:24,378 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:24,385 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 14:15:24,387 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 14:15:24,391 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 14:15:24,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.508sec 2023-07-16 
14:15:24,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 14:15:24,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 14:15:24,396 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 14:15:24,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41971,1689516918385-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 14:15:24,399 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41971,1689516918385-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 14:15:24,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 14:15:24,469 DEBUG [Listener at localhost/36419] zookeeper.ReadOnlyZKClient(139): Connect 0x47d8ccec to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:24,474 DEBUG [Listener at localhost/36419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37ac0919, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:24,492 DEBUG [hconnection-0x5a7dbe2d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:24,509 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59574, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:24,520 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:24,522 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:24,533 DEBUG [Listener at localhost/36419] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 14:15:24,537 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59606, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 14:15:24,556 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 14:15:24,556 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:24,557 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 14:15:24,563 DEBUG [Listener at localhost/36419] zookeeper.ReadOnlyZKClient(139): Connect 0x61cb1bf4 to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 
14:15:24,570 DEBUG [Listener at localhost/36419] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3f04a498, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:24,570 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:24,580 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:24,584 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016e7cc586000a connected 2023-07-16 14:15:24,610 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=682, MaxFileDescriptor=60000, SystemLoadAverage=459, ProcessCount=176, AvailableMemoryMB=3195 2023-07-16 14:15:24,613 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-16 14:15:24,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:24,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:24,688 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 14:15:24,704 INFO [Listener at localhost/36419] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:24,705 INFO [Listener at localhost/36419] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:24,709 INFO [Listener at localhost/36419] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44287 2023-07-16 14:15:24,710 INFO [Listener at localhost/36419] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 
14:15:24,714 DEBUG [Listener at localhost/36419] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:24,716 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:24,720 INFO [Listener at localhost/36419] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:24,723 INFO [Listener at localhost/36419] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44287 connecting to ZooKeeper ensemble=127.0.0.1:63627 2023-07-16 14:15:24,726 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:442870x0, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:24,728 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44287-0x1016e7cc586000b connected 2023-07-16 14:15:24,728 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:24,729 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 14:15:24,730 DEBUG [Listener at localhost/36419] zookeeper.ZKUtil(164): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:24,731 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44287 2023-07-16 14:15:24,734 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44287 2023-07-16 14:15:24,740 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44287 2023-07-16 14:15:24,740 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44287 2023-07-16 14:15:24,740 DEBUG [Listener at localhost/36419] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44287 2023-07-16 14:15:24,743 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:24,743 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:24,743 INFO [Listener at localhost/36419] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:24,744 INFO [Listener at localhost/36419] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:24,744 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter 
(class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:24,744 INFO [Listener at localhost/36419] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:24,744 INFO [Listener at localhost/36419] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:24,745 INFO [Listener at localhost/36419] http.HttpServer(1146): Jetty bound to port 45101 2023-07-16 14:15:24,745 INFO [Listener at localhost/36419] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:24,748 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:24,748 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@623affc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:24,749 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:24,749 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:24,759 INFO [Listener at localhost/36419] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:24,760 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:24,760 INFO [Listener at localhost/36419] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:24,760 INFO [Listener at localhost/36419] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 14:15:24,783 INFO [Listener at localhost/36419] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:24,784 INFO [Listener at localhost/36419] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@78c1ca58{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:24,788 INFO [Listener at localhost/36419] server.AbstractConnector(333): Started ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:45101} 2023-07-16 14:15:24,788 INFO [Listener at localhost/36419] server.Server(415): Started @12377ms 2023-07-16 14:15:24,797 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(951): ClusterId : ca8f0aed-5757-4c7c-b261-87885dcb06d4 2023-07-16 14:15:24,797 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:24,811 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:24,811 DEBUG [RS:3;jenkins-hbase4:44287] 
procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:24,813 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:24,815 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ReadOnlyZKClient(139): Connect 0x02024450 to 127.0.0.1:63627 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:24,837 DEBUG [RS:3;jenkins-hbase4:44287] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cc5aa60, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:24,837 DEBUG [RS:3;jenkins-hbase4:44287] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8fa2f20, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:24,846 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44287 2023-07-16 14:15:24,846 INFO [RS:3;jenkins-hbase4:44287] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:24,846 INFO [RS:3;jenkins-hbase4:44287] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:24,846 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:24,847 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,41971,1689516918385 with isa=jenkins-hbase4.apache.org/172.31.14.131:44287, startcode=1689516924704 2023-07-16 14:15:24,847 DEBUG [RS:3;jenkins-hbase4:44287] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:24,851 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38171, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:24,852 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41971] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,852 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 14:15:24,852 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1 2023-07-16 14:15:24,853 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:42609 2023-07-16 14:15:24,853 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44773 2023-07-16 14:15:24,858 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:24,858 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:24,858 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:24,858 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:24,858 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:24,859 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:24,859 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44287,1689516924704] 2023-07-16 14:15:24,859 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:24,859 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:24,859 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,860 WARN [RS:3;jenkins-hbase4:44287] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 14:15:24,860 INFO [RS:3;jenkins-hbase4:44287] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:24,860 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1948): logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:24,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:24,866 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,41971,1689516918385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 14:15:24,866 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:24,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:24,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,867 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,868 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,869 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,872 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:24,872 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:24,872 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,873 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ZKUtil(162): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:24,874 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:24,875 INFO [RS:3;jenkins-hbase4:44287] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:24,878 INFO [RS:3;jenkins-hbase4:44287] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:24,879 INFO [RS:3;jenkins-hbase4:44287] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:24,879 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:24,882 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:24,884 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,885 DEBUG [RS:3;jenkins-hbase4:44287] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:24,887 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:24,887 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:24,887 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:24,900 INFO [RS:3;jenkins-hbase4:44287] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:24,900 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44287,1689516924704-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:24,914 INFO [RS:3;jenkins-hbase4:44287] regionserver.Replication(203): jenkins-hbase4.apache.org,44287,1689516924704 started 2023-07-16 14:15:24,915 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44287,1689516924704, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44287, sessionid=0x1016e7cc586000b 2023-07-16 14:15:24,915 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:24,915 DEBUG [RS:3;jenkins-hbase4:44287] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,915 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44287,1689516924704' 2023-07-16 14:15:24,915 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:24,915 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44287,1689516924704' 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:24,916 DEBUG [RS:3;jenkins-hbase4:44287] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:24,917 DEBUG [RS:3;jenkins-hbase4:44287] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:24,917 INFO [RS:3;jenkins-hbase4:44287] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:24,917 INFO [RS:3;jenkins-hbase4:44287] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
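The entries above cover the fourth region server (RS:3, port 44287) coming up: WAL provider selection, ZooKeeper registration under /hbase/rs, memstore and compaction-throughput limits, chores, executor pools, the flush-table and online-snapshot procedure members, and finally quota support being reported as disabled. A minimal sketch of how a test might add such an extra region server to an already-running mini cluster is below; it assumes a JUnit harness with an HBaseTestingUtility (here called testUtil) and is illustrative only, not code taken from TestRSGroupsAdmin1.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class ExtraRegionServerSketch {
      // Starts one additional HRegionServer on an already-running mini cluster and
      // waits until it has registered, as RS:3 does in the log above.
      public static void startExtraRegionServer(HBaseTestingUtility testUtil) throws Exception {
        MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
        cluster.startRegionServer();  // the new server picks an ephemeral RPC port (44287 above)
        // Block until the fourth server has checked in (3 original + 1 new).
        testUtil.waitFor(60000,
            () -> cluster.getLiveRegionServerThreads().size() == 4);
      }
    }

The waitFor guard corresponds to the point in the log where RSGroupInfoManagerImpl's ServerEventsListenerThread reports "Updated with servers: 4".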
2023-07-16 14:15:24,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:24,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:24,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:24,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:24,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:24,935 DEBUG [hconnection-0x4cec13a7-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:24,940 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59584, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:24,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:24,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:24,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:24,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:24,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:59606 deadline: 1689518124959, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
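This block is the rsgroup bookkeeping done in test setup: the master handles AddRSGroup for a group named "master", answers ListRSGroupInfos, and then rejects a MoveServers request for jenkins-hbase4.apache.org:41971 (the master's own address, which is not a registered region server) with a ConstraintException; the client-side WARN and full stack trace follow below. The trace names RSGroupAdminClient, so a hedged sketch of the client calls that would produce this sequence looks roughly like the following; the connection wiring and surrounding method shape are assumptions, not the test's actual code.

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupSetupSketch {
      public static void addMasterGroup(Connection conn, String masterHostPort) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("master");   // -> AddRSGroup RPC seen above
        rsGroupAdmin.listRSGroups();         // -> ListRSGroupInfos RPC seen above
        try {
          // The master's address is not a registered region server, so the server
          // rejects the move with the ConstraintException logged above.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString(masterHostPort)), "master");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist."
        }
      }
    }

TestRSGroupsBase logs the exception as "Got this on setup, FYI" and carries on, so the rejection is an expected condition rather than a test failure.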
2023-07-16 14:15:24,961 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:24,963 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:24,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:24,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:24,965 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:24,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:24,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:24,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:24,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:24,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:24,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:24,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:24,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:24,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:24,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:24,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:24,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:24,992 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:24,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:24,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:24,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:24,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:25,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(238): Moving server region 701c50185fdc12fe0464bfa3b96e779c, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:25,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:25,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:25,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:25,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:25,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:25,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, REOPEN/MOVE 2023-07-16 14:15:25,007 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, REOPEN/MOVE 2023-07-16 14:15:25,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:25,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:25,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:25,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:25,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:25,008 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:25,008 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:25,009 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516925008"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516925008"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516925008"}]},"ts":"1689516925008"} 2023-07-16 14:15:25,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 14:15:25,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(238): Moving server region bb99c7296a6419e19ffe990276a43f38, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:25,010 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 14:15:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:25,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:25,011 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34921,1689516920700, state=CLOSING 2023-07-16 14:15:25,012 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE 2023-07-16 14:15:25,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-16 14:15:25,013 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE 2023-07-16 14:15:25,013 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 
14:15:25,014 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:25,014 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:25,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:25,020 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:25,020 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516925019"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516925019"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516925019"}]},"ts":"1689516925019"} 2023-07-16 14:15:25,021 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=12, state=RUNNABLE; CloseRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:25,022 INFO [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44287%2C1689516924704, suffix=, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,44287,1689516924704, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:25,024 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; CloseRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:25,026 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=17, ppid=14, state=RUNNABLE; CloseRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:25,046 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:25,046 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:25,046 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:25,054 INFO [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,44287,1689516924704/jenkins-hbase4.apache.org%2C44287%2C1689516924704.1689516925023 2023-07-16 14:15:25,054 DEBUG [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK], DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK]] 2023-07-16 14:15:25,177 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-16 14:15:25,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:25,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:25,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:25,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:25,179 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:25,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=3.21 KB heapSize=6.16 KB 2023-07-16 14:15:25,274 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.03 KB at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/info/750999ffce7f4f73b6e1f731ed6d2c06 2023-07-16 14:15:25,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=16 (bloomFilter=false), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/table/17a5243c31cd4163ad06df1759a21f04 2023-07-16 14:15:25,360 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/info/750999ffce7f4f73b6e1f731ed6d2c06 as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info/750999ffce7f4f73b6e1f731ed6d2c06 2023-07-16 14:15:25,372 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info/750999ffce7f4f73b6e1f731ed6d2c06, entries=22, sequenceid=16, filesize=7.3 K 2023-07-16 14:15:25,376 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/table/17a5243c31cd4163ad06df1759a21f04 as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table/17a5243c31cd4163ad06df1759a21f04 2023-07-16 
14:15:25,387 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table/17a5243c31cd4163ad06df1759a21f04, entries=4, sequenceid=16, filesize=4.8 K 2023-07-16 14:15:25,390 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~3.21 KB/3290, heapSize ~5.88 KB/6024, currentSize=0 B/0 for 1588230740 in 210ms, sequenceid=16, compaction requested=false 2023-07-16 14:15:25,391 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 14:15:25,413 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/recovered.edits/19.seqid, newMaxSeqId=19, maxSeqId=1 2023-07-16 14:15:25,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:25,415 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:25,415 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:25,415 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44287,1689516924704 record at close sequenceid=16 2023-07-16 14:15:25,419 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-16 14:15:25,422 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-16 14:15:25,434 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 14:15:25,434 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34921,1689516920700 in 404 msec 2023-07-16 14:15:25,436 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:25,586 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
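Between 14:15:25,002 and 14:15:25,586 the hbase:meta region (1588230740) is moved off jenkins-hbase4.apache.org,34921 because that server is joining Group_testTableMoveTruncateAndDrop_248150470 while meta stays in the default group: the region is closed, its memstore (3.21 KB of data) is flushed to info/ and table/ HFiles, a recovered.edits seqid marker is written, and the balancer reassigns it to the new server on port 44287. A small sketch of how a client could confirm where hbase:meta (or any region) ended up after such a move is shown here; it is illustrative only and not part of the test.

    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      // Returns the server currently hosting hbase:meta,,1.
      public static ServerName currentMetaServer(Connection conn) throws Exception {
        try (RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Force a fresh lookup so a just-completed move is reflected.
          return locator.getRegionLocation(new byte[0], true).getServerName();
        }
      }

      // Checks whether a region with the given encoded name is open on a server.
      public static boolean serverHostsRegion(Admin admin, ServerName sn, String encodedName)
          throws Exception {
        for (RegionInfo ri : admin.getRegions(sn)) {
          if (ri.getEncodedName().equals(encodedName)) {
            return true;
          }
        }
        return false;
      }
    }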
2023-07-16 14:15:25,587 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44287,1689516924704, state=OPENING 2023-07-16 14:15:25,589 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:25,589 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:25,589 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:25,744 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:25,744 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:25,748 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38030, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:25,754 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 14:15:25,754 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:25,757 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44287%2C1689516924704.meta, suffix=.meta, logDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,44287,1689516924704, archiveDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs, maxLogs=32 2023-07-16 14:15:25,790 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK] 2023-07-16 14:15:25,804 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK] 2023-07-16 14:15:25,808 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK] 2023-07-16 14:15:25,814 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/WALs/jenkins-hbase4.apache.org,44287,1689516924704/jenkins-hbase4.apache.org%2C44287%2C1689516924704.meta.1689516925759.meta 2023-07-16 14:15:25,817 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:42869,DS-b2b4229c-71db-43a2-ad3d-ead4729d004b,DISK], DatanodeInfoWithStorage[127.0.0.1:34829,DS-295e8c93-396b-49b2-b552-da44a87ff94f,DISK], DatanodeInfoWithStorage[127.0.0.1:40055,DS-4e8e9f75-8ccb-4414-adbc-e1077f8f33e3,DISK]] 2023-07-16 14:15:25,817 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:25,817 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:25,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 14:15:25,818 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 14:15:25,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 14:15:25,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:25,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 14:15:25,818 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 14:15:25,824 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:25,826 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info 2023-07-16 14:15:25,826 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info 2023-07-16 14:15:25,827 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:25,846 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info/750999ffce7f4f73b6e1f731ed6d2c06 2023-07-16 14:15:25,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:25,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:25,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:25,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:25,850 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:25,851 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:25,851 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:25,853 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table 2023-07-16 14:15:25,853 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table 2023-07-16 14:15:25,853 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:25,878 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table/17a5243c31cd4163ad06df1759a21f04 2023-07-16 14:15:25,878 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:25,880 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:25,883 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740 2023-07-16 14:15:25,887 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 14:15:25,889 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:25,891 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=20; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10393066080, jitterRate=-0.03207029402256012}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:25,891 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:25,892 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=18, masterSystemTime=1689516925744 2023-07-16 14:15:25,897 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 14:15:25,898 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 14:15:25,898 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44287,1689516924704, state=OPEN 2023-07-16 14:15:25,900 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:25,900 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:25,905 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=13 2023-07-16 14:15:25,905 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,44287,1689516924704 in 311 msec 2023-07-16 14:15:25,907 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 897 msec 2023-07-16 14:15:26,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-16 14:15:26,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bb99c7296a6419e19ffe990276a43f38, disabling compactions & flushes 2023-07-16 14:15:26,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 701c50185fdc12fe0464bfa3b96e779c, disabling compactions & flushes 2023-07-16 14:15:26,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:26,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:26,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. after waiting 0 ms 2023-07-16 14:15:26,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. after waiting 0 ms 2023-07-16 14:15:26,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:26,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 
2023-07-16 14:15:26,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing bb99c7296a6419e19ffe990276a43f38 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-16 14:15:26,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 701c50185fdc12fe0464bfa3b96e779c 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-16 14:15:26,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/.tmp/m/d7cb8f8ae13442c8928fdb787b589b68 2023-07-16 14:15:26,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/.tmp/info/31a880b9adb041f4a9f7745816e07fde 2023-07-16 14:15:26,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/.tmp/m/d7cb8f8ae13442c8928fdb787b589b68 as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m/d7cb8f8ae13442c8928fdb787b589b68 2023-07-16 14:15:26,163 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/.tmp/info/31a880b9adb041f4a9f7745816e07fde as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info/31a880b9adb041f4a9f7745816e07fde 2023-07-16 14:15:26,171 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m/d7cb8f8ae13442c8928fdb787b589b68, entries=3, sequenceid=9, filesize=5.2 K 2023-07-16 14:15:26,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info/31a880b9adb041f4a9f7745816e07fde, entries=2, sequenceid=6, filesize=4.8 K 2023-07-16 14:15:26,177 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for 701c50185fdc12fe0464bfa3b96e779c in 120ms, sequenceid=9, compaction requested=false 2023-07-16 14:15:26,178 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 14:15:26,179 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for bb99c7296a6419e19ffe990276a43f38 in 122ms, sequenceid=6, compaction requested=false 2023-07-16 14:15:26,179 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 14:15:26,201 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 14:15:26,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:26,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-16 14:15:26,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,203 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 701c50185fdc12fe0464bfa3b96e779c: 2023-07-16 14:15:26,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 701c50185fdc12fe0464bfa3b96e779c move to jenkins-hbase4.apache.org,44287,1689516924704 record at close sequenceid=9 2023-07-16 14:15:26,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:26,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:26,204 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bb99c7296a6419e19ffe990276a43f38 move to jenkins-hbase4.apache.org,43741,1689516920562 record at close sequenceid=6 2023-07-16 14:15:26,207 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,211 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=CLOSED 2023-07-16 14:15:26,212 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516926211"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516926211"}]},"ts":"1689516926211"} 2023-07-16 14:15:26,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,213 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=CLOSED 2023-07-16 14:15:26,213 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34921] ipc.CallRunner(144): callId: 42 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:59564 deadline: 1689516986212, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=16. 
2023-07-16 14:15:26,213 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516926213"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516926213"}]},"ts":"1689516926213"} 2023-07-16 14:15:26,213 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34921] ipc.CallRunner(144): callId: 43 service: ClientService methodName: Mutate size: 217 connection: 172.31.14.131:59564 deadline: 1689516986213, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=16. 2023-07-16 14:15:26,317 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:26,318 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38042, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:26,325 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-16 14:15:26,326 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; CloseRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,41933,1689516920766 in 1.2980 sec 2023-07-16 14:15:26,327 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:26,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=12 2023-07-16 14:15:26,327 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; CloseRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,34921,1689516920700 in 1.3110 sec 2023-07-16 14:15:26,328 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:26,328 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 14:15:26,329 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:26,329 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516926329"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516926329"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516926329"}]},"ts":"1689516926329"} 2023-07-16 14:15:26,330 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:26,330 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516926329"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516926329"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516926329"}]},"ts":"1689516926329"} 2023-07-16 14:15:26,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=14, state=RUNNABLE; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:26,333 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=12, state=RUNNABLE; OpenRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:26,486 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:26,486 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:26,490 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41936, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:26,494 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 701c50185fdc12fe0464bfa3b96e779c, NAME => 'hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:26,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:26,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. service=MultiRowMutationService 2023-07-16 14:15:26,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
2023-07-16 14:15:26,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bb99c7296a6419e19ffe990276a43f38, NAME => 'hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,497 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,499 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,499 DEBUG [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m 2023-07-16 14:15:26,500 DEBUG [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m 2023-07-16 14:15:26,500 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:26,500 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:26,500 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 701c50185fdc12fe0464bfa3b96e779c columnFamilyName m 2023-07-16 14:15:26,501 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bb99c7296a6419e19ffe990276a43f38 columnFamilyName info 2023-07-16 14:15:26,511 DEBUG [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] regionserver.HStore(539): loaded hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m/d7cb8f8ae13442c8928fdb787b589b68 2023-07-16 14:15:26,513 INFO [StoreOpener-701c50185fdc12fe0464bfa3b96e779c-1] regionserver.HStore(310): Store=701c50185fdc12fe0464bfa3b96e779c/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:26,514 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(539): loaded hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info/31a880b9adb041f4a9f7745816e07fde 2023-07-16 14:15:26,517 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(310): Store=bb99c7296a6419e19ffe990276a43f38/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:26,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered 
edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:26,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:26,526 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 701c50185fdc12fe0464bfa3b96e779c; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@18f286ce, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:26,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 701c50185fdc12fe0464bfa3b96e779c: 2023-07-16 14:15:26,526 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bb99c7296a6419e19ffe990276a43f38; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10353748640, jitterRate=-0.03573201596736908}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:26,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:26,527 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38., pid=19, masterSystemTime=1689516926486 2023-07-16 14:15:26,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c., pid=20, masterSystemTime=1689516926487 2023-07-16 14:15:26,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:26,534 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
2023-07-16 14:15:26,535 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:26,536 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516926535"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516926535"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516926535"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516926535"}]},"ts":"1689516926535"} 2023-07-16 14:15:26,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:26,537 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=701c50185fdc12fe0464bfa3b96e779c, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:26,538 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516926537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516926537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516926537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516926537"}]},"ts":"1689516926537"} 2023-07-16 14:15:26,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=14 2023-07-16 14:15:26,545 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=12 2023-07-16 14:15:26,545 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=14, state=SUCCESS; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,43741,1689516920562 in 208 msec 2023-07-16 14:15:26,546 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=12, state=SUCCESS; OpenRegionProcedure 701c50185fdc12fe0464bfa3b96e779c, server=jenkins-hbase4.apache.org,44287,1689516924704 in 208 msec 2023-07-16 14:15:26,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE in 1.5350 sec 2023-07-16 14:15:26,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=701c50185fdc12fe0464bfa3b96e779c, REOPEN/MOVE in 1.5410 sec 2023-07-16 14:15:27,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to default 2023-07-16 14:15:27,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => 
Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:27,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:27,017 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34921] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:59584 deadline: 1689516987016, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=9. 2023-07-16 14:15:27,121 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34921] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:59584 deadline: 1689516987121, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=16. 2023-07-16 14:15:27,224 DEBUG [hconnection-0x4cec13a7-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:27,228 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38050, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:27,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:27,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:27,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:27,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:27,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:27,271 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:27,273 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:27,275 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34921] ipc.CallRunner(144): callId: 52 service: ClientService 
methodName: ExecService size: 619 connection: 172.31.14.131:59564 deadline: 1689516987275, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=9. 2023-07-16 14:15:27,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 21 2023-07-16 14:15:27,285 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:27,382 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:27,383 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:27,383 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:27,384 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:27,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:27,394 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:27,401 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:27,401 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:27,402 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:27,402 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,402 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,402 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 empty. 2023-07-16 14:15:27,403 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f empty. 
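The entries above record the master finishing an RSGroupAdminService.MoveServers request (default => Group_testTableMoveTruncateAndDrop_248150470) before table creation begins. As an illustrative aside only, a minimal client-side sketch of issuing such a move with the hbase-rsgroup admin client is shown below; the configuration, connection setup, and exact ports are assumptions, not taken from this log.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    // Illustrative sketch; cluster addresses and group name mirror the log but are assumptions here.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      String group = "Group_testTableMoveTruncateAndDrop_248150470";
      rsGroupAdmin.addRSGroup(group);  // create the target group first
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34921));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41933));
      // Regions hosted on these servers (hbase:rsgroup, hbase:namespace above) are moved off
      // to servers remaining in "default" before the servers join the new group.
      rsGroupAdmin.moveServers(servers, group);
      System.out.println(rsGroupAdmin.getRSGroupInfo(group).getServers());
    }
  }
}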
2023-07-16 14:15:27,403 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 empty. 2023-07-16 14:15:27,403 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 empty. 2023-07-16 14:15:27,406 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:27,407 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 empty. 2023-07-16 14:15:27,408 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:27,408 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,408 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,408 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:27,408 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 14:15:27,443 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:27,446 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 63823929c4c50daaf883cb008c86fd59, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:27,447 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b69b0f2b7ae79d3665e5b3ec10846151, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:27,447 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => e3d99ab113cfa7396d5a9aa54612b984, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:27,492 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,492 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,493 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing e3d99ab113cfa7396d5a9aa54612b984, disabling compactions & flushes 2023-07-16 14:15:27,493 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b69b0f2b7ae79d3665e5b3ec10846151, disabling compactions & flushes 2023-07-16 14:15:27,493 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:27,493 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. after waiting 0 ms 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:27,494 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. after waiting 0 ms 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for e3d99ab113cfa7396d5a9aa54612b984: 2023-07-16 14:15:27,494 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:27,494 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b69b0f2b7ae79d3665e5b3ec10846151: 2023-07-16 14:15:27,495 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 674a175db354ee25768c8387797d8c2f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:27,495 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 668f8446053954f71dc066588196c5a6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:27,496 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated 
Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 63823929c4c50daaf883cb008c86fd59, disabling compactions & flushes 2023-07-16 14:15:27,497 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:27,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:27,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. after waiting 0 ms 2023-07-16 14:15:27,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:27,497 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:27,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 63823929c4c50daaf883cb008c86fd59: 2023-07-16 14:15:27,528 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,529 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,536 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 674a175db354ee25768c8387797d8c2f, disabling compactions & flushes 2023-07-16 14:15:27,536 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 668f8446053954f71dc066588196c5a6, disabling compactions & flushes 2023-07-16 14:15:27,536 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:27,537 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 
2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. after waiting 0 ms 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. after waiting 0 ms 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:27,537 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:27,537 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 674a175db354ee25768c8387797d8c2f: 2023-07-16 14:15:27,537 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 668f8446053954f71dc066588196c5a6: 2023-07-16 14:15:27,545 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:27,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516927546"}]},"ts":"1689516927546"} 2023-07-16 14:15:27,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516927546"}]},"ts":"1689516927546"} 2023-07-16 14:15:27,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516927546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516927546"}]},"ts":"1689516927546"} 2023-07-16 14:15:27,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516927546"}]},"ts":"1689516927546"} 2023-07-16 14:15:27,548 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516927546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516927546"}]},"ts":"1689516927546"} 2023-07-16 14:15:27,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:27,621 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 2023-07-16 14:15:27,623 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:27,623 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516927623"}]},"ts":"1689516927623"} 2023-07-16 14:15:27,626 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 14:15:27,636 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:27,637 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:27,637 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:27,637 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:27,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, ASSIGN}, {pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, ASSIGN}, {pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, ASSIGN}, {pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, ASSIGN}, {pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, ASSIGN}] 2023-07-16 14:15:27,641 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, ASSIGN 2023-07-16 14:15:27,641 INFO [PEWorker-1] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, ASSIGN 2023-07-16 14:15:27,643 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, ASSIGN 2023-07-16 14:15:27,644 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, ASSIGN 2023-07-16 14:15:27,644 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:27,646 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=24, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:27,647 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:27,647 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, ASSIGN 2023-07-16 14:15:27,647 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=25, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:27,652 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=26, ppid=21, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:27,794 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
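The create request at 14:15:27,269 and the five RegionOpenAndInit closes above correspond to a table pre-split into five regions by four split keys ('aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). A minimal sketch of issuing the equivalent create with the HBase 2.4 Admin API follows; the split keys and family name are copied from the log, the rest (configuration, connection) is an illustrative assumption.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Four split keys yield the five regions seen in the log:
      // ('', aaaaa), (aaaaa, i\xBF\x14i\xBE), (i\xBF\x14i\xBE, r\x1C\xC7r\x1B), (r\x1C\xC7r\x1B, zzzzz), (zzzzz, '')
      byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
        new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
        Bytes.toBytes("zzzzz")
      };
      admin.createTable(
        TableDescriptorBuilder.newBuilder(table)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build(),
        splitKeys);
    }
  }
}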
2023-07-16 14:15:27,804 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:27,804 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:27,805 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927804"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516927804"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516927804"}]},"ts":"1689516927804"} 2023-07-16 14:15:27,805 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927804"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516927804"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516927804"}]},"ts":"1689516927804"} 2023-07-16 14:15:27,804 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:27,805 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:27,805 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516927804"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516927804"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516927804"}]},"ts":"1689516927804"} 2023-07-16 14:15:27,806 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516927805"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516927805"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516927805"}]},"ts":"1689516927805"} 2023-07-16 14:15:27,807 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:27,807 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516927807"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516927807"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516927807"}]},"ts":"1689516927807"} 2023-07-16 14:15:27,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=23, state=RUNNABLE; OpenRegionProcedure 
b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:27,810 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=24, state=RUNNABLE; OpenRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:27,813 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=22, state=RUNNABLE; OpenRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:27,818 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; OpenRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:27,818 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=26, state=RUNNABLE; OpenRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:27,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:27,975 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:27,975 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674a175db354ee25768c8387797d8c2f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 14:15:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 
2023-07-16 14:15:27,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 668f8446053954f71dc066588196c5a6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 14:15:27,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:27,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,980 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,982 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:27,991 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:27,997 DEBUG [StoreOpener-674a175db354ee25768c8387797d8c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/f 2023-07-16 14:15:27,997 DEBUG [StoreOpener-674a175db354ee25768c8387797d8c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/f 2023-07-16 14:15:27,998 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674a175db354ee25768c8387797d8c2f columnFamilyName f 2023-07-16 14:15:27,998 DEBUG [StoreOpener-668f8446053954f71dc066588196c5a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/f 2023-07-16 14:15:27,998 DEBUG [StoreOpener-668f8446053954f71dc066588196c5a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/f 2023-07-16 14:15:27,998 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] regionserver.HStore(310): Store=674a175db354ee25768c8387797d8c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:28,001 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 668f8446053954f71dc066588196c5a6 columnFamilyName f 2023-07-16 14:15:28,003 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] regionserver.HStore(310): Store=668f8446053954f71dc066588196c5a6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:28,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:28,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:28,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:28,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:28,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:28,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:28,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:28,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:28,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 668f8446053954f71dc066588196c5a6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10950857760, jitterRate=0.019878104329109192}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:28,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 668f8446053954f71dc066588196c5a6: 2023-07-16 14:15:28,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674a175db354ee25768c8387797d8c2f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11276972480, jitterRate=0.05024990439414978}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:28,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674a175db354ee25768c8387797d8c2f: 2023-07-16 14:15:28,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6., pid=31, masterSystemTime=1689516927965 2023-07-16 14:15:28,037 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f., pid=30, masterSystemTime=1689516927964 2023-07-16 14:15:28,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:28,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:28,041 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 
2023-07-16 14:15:28,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3d99ab113cfa7396d5a9aa54612b984, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 14:15:28,041 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,042 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:28,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:28,042 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928042"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516928042"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516928042"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516928042"}]},"ts":"1689516928042"} 2023-07-16 14:15:28,042 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:28,042 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b69b0f2b7ae79d3665e5b3ec10846151, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 14:15:28,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:28,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,045 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,051 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,052 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, ASSIGN in 412 msec 2023-07-16 14:15:28,049 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=26 2023-07-16 14:15:28,047 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:28,053 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=26, state=SUCCESS; OpenRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,43741,1689516920562 in 227 msec 2023-07-16 14:15:28,053 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928047"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516928047"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516928047"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516928047"}]},"ts":"1689516928047"} 2023-07-16 14:15:28,056 DEBUG [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/f 2023-07-16 14:15:28,056 DEBUG [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/f 2023-07-16 14:15:28,057 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b69b0f2b7ae79d3665e5b3ec10846151 columnFamilyName f 2023-07-16 14:15:28,057 DEBUG [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/f 2023-07-16 14:15:28,058 DEBUG [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/f 2023-07-16 14:15:28,059 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3d99ab113cfa7396d5a9aa54612b984 columnFamilyName f 2023-07-16 14:15:28,060 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] regionserver.HStore(310): Store=b69b0f2b7ae79d3665e5b3ec10846151/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:28,061 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-16 14:15:28,062 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; OpenRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,44287,1689516924704 in 238 msec 2023-07-16 14:15:28,062 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] regionserver.HStore(310): Store=e3d99ab113cfa7396d5a9aa54612b984/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:28,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,069 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, ASSIGN in 425 msec 2023-07-16 14:15:28,072 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,074 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:28,079 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b69b0f2b7ae79d3665e5b3ec10846151; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11148723200, jitterRate=0.03830575942993164}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:28,079 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b69b0f2b7ae79d3665e5b3ec10846151: 2023-07-16 14:15:28,081 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151., pid=27, masterSystemTime=1689516927964 2023-07-16 14:15:28,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:28,083 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:28,085 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:28,085 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928085"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516928085"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516928085"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516928085"}]},"ts":"1689516928085"} 2023-07-16 14:15:28,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:28,086 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e3d99ab113cfa7396d5a9aa54612b984; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11267583040, jitterRate=0.049375444650650024}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:28,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e3d99ab113cfa7396d5a9aa54612b984: 2023-07-16 14:15:28,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984., pid=28, masterSystemTime=1689516927965 2023-07-16 14:15:28,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:28,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:28,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 
2023-07-16 14:15:28,093 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63823929c4c50daaf883cb008c86fd59, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 14:15:28,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:28,094 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928093"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516928093"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516928093"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516928093"}]},"ts":"1689516928093"} 2023-07-16 14:15:28,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,095 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,096 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=23 2023-07-16 14:15:28,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=23, state=SUCCESS; OpenRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,44287,1689516924704 in 280 msec 2023-07-16 14:15:28,100 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, ASSIGN in 460 msec 2023-07-16 14:15:28,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=24 2023-07-16 14:15:28,103 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,103 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=24, state=SUCCESS; OpenRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,43741,1689516920562 in 289 msec 2023-07-16 14:15:28,105 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=e3d99ab113cfa7396d5a9aa54612b984, ASSIGN in 466 msec 2023-07-16 14:15:28,105 DEBUG [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/f 2023-07-16 14:15:28,105 DEBUG [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/f 2023-07-16 14:15:28,106 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63823929c4c50daaf883cb008c86fd59 columnFamilyName f 2023-07-16 14:15:28,107 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] regionserver.HStore(310): Store=63823929c4c50daaf883cb008c86fd59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:28,108 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,110 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:28,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 63823929c4c50daaf883cb008c86fd59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10878921600, jitterRate=0.013178527355194092}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:28,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 63823929c4c50daaf883cb008c86fd59: 2023-07-16 14:15:28,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59., pid=29, masterSystemTime=1689516927965 2023-07-16 14:15:28,127 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,127 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,127 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,128 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928127"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516928127"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516928127"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516928127"}]},"ts":"1689516928127"} 2023-07-16 14:15:28,133 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=22 2023-07-16 14:15:28,133 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=22, state=SUCCESS; OpenRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,43741,1689516920562 in 317 msec 2023-07-16 14:15:28,139 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-16 14:15:28,140 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, ASSIGN in 496 msec 2023-07-16 14:15:28,142 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:28,142 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516928142"}]},"ts":"1689516928142"} 2023-07-16 14:15:28,145 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 14:15:28,148 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=21, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:28,152 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 879 msec 2023-07-16 14:15:28,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:28,397 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 21 completed 2023-07-16 
14:15:28,397 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-16 14:15:28,398 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:28,403 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34921] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:59574 deadline: 1689516988403, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=16. 2023-07-16 14:15:28,507 DEBUG [hconnection-0x5a7dbe2d-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:28,519 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:28,539 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-16 14:15:28,541 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:28,543 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-16 14:15:28,546 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:28,558 DEBUG [Listener at localhost/36419] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:28,565 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59598, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:28,571 DEBUG [Listener at localhost/36419] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:28,581 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49900, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:28,585 DEBUG [Listener at localhost/36419] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:28,596 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41950, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:28,600 DEBUG [Listener at localhost/36419] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:28,602 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38068, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:28,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:28,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) 
master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:28,615 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,624 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:28,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:28,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:28,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 63823929c4c50daaf883cb008c86fd59 to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:28,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:28,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:28,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:28,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:28,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, REOPEN/MOVE 2023-07-16 14:15:28,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region b69b0f2b7ae79d3665e5b3ec10846151 to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,638 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, REOPEN/MOVE 2023-07-16 14:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:28,638 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:28,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:28,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, REOPEN/MOVE 2023-07-16 14:15:28,643 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,643 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region e3d99ab113cfa7396d5a9aa54612b984 to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,643 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, REOPEN/MOVE 2023-07-16 14:15:28,643 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928643"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516928643"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516928643"}]},"ts":"1689516928643"} 2023-07-16 14:15:28,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:28,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:28,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:28,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:28,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:28,645 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:28,645 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928645"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516928645"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516928645"}]},"ts":"1689516928645"} 2023-07-16 14:15:28,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, REOPEN/MOVE 2023-07-16 14:15:28,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 674a175db354ee25768c8387797d8c2f to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,647 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, REOPEN/MOVE 2023-07-16 14:15:28,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:28,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:28,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:28,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:28,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:28,649 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,649 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928648"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516928648"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516928648"}]},"ts":"1689516928648"} 2023-07-16 14:15:28,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, REOPEN/MOVE 2023-07-16 14:15:28,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 668f8446053954f71dc066588196c5a6 to RSGroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:28,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:28,650 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, REOPEN/MOVE 2023-07-16 14:15:28,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:28,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:28,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:28,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:28,652 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:28,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, REOPEN/MOVE 2023-07-16 14:15:28,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_248150470, current retry=0 2023-07-16 14:15:28,654 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:28,654 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928654"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516928654"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516928654"}]},"ts":"1689516928654"} 2023-07-16 14:15:28,654 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=33, state=RUNNABLE; CloseRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:28,656 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, REOPEN/MOVE 2023-07-16 14:15:28,657 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=34, state=RUNNABLE; CloseRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:28,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=35, state=RUNNABLE; CloseRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:28,658 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:28,658 DEBUG [PEWorker-4] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928658"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516928658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516928658"}]},"ts":"1689516928658"} 2023-07-16 14:15:28,662 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=36, state=RUNNABLE; CloseRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:28,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 63823929c4c50daaf883cb008c86fd59, disabling compactions & flushes 2023-07-16 14:15:28,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. after waiting 0 ms 2023-07-16 14:15:28,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b69b0f2b7ae79d3665e5b3ec10846151, disabling compactions & flushes 2023-07-16 14:15:28,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,817 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:28,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:28,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. after waiting 0 ms 2023-07-16 14:15:28,817 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:28,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:28,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:28,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:28,845 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b69b0f2b7ae79d3665e5b3ec10846151: 2023-07-16 14:15:28,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b69b0f2b7ae79d3665e5b3ec10846151 move to jenkins-hbase4.apache.org,41933,1689516920766 record at close sequenceid=2 2023-07-16 14:15:28,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:28,845 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 63823929c4c50daaf883cb008c86fd59: 2023-07-16 14:15:28,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 63823929c4c50daaf883cb008c86fd59 move to jenkins-hbase4.apache.org,34921,1689516920700 record at close sequenceid=2 2023-07-16 14:15:28,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:28,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:28,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674a175db354ee25768c8387797d8c2f, disabling compactions & flushes 2023-07-16 14:15:28,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:28,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:28,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. after waiting 0 ms 2023-07-16 14:15:28,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 
2023-07-16 14:15:28,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:28,860 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e3d99ab113cfa7396d5a9aa54612b984, disabling compactions & flushes 2023-07-16 14:15:28,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. after waiting 0 ms 2023-07-16 14:15:28,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:28,867 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=CLOSED 2023-07-16 14:15:28,867 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=CLOSED 2023-07-16 14:15:28,867 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516928867"}]},"ts":"1689516928867"} 2023-07-16 14:15:28,867 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928867"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516928867"}]},"ts":"1689516928867"} 2023-07-16 14:15:28,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-16 14:15:28,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,43741,1689516920562 in 219 msec 2023-07-16 14:15:28,876 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:28,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=33 2023-07-16 14:15:28,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=33, state=SUCCESS; 
CloseRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,44287,1689516924704 in 221 msec 2023-07-16 14:15:28,879 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:28,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:28,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:28,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674a175db354ee25768c8387797d8c2f: 2023-07-16 14:15:28,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 674a175db354ee25768c8387797d8c2f move to jenkins-hbase4.apache.org,34921,1689516920700 record at close sequenceid=2 2023-07-16 14:15:28,894 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:28,896 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=CLOSED 2023-07-16 14:15:28,896 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928895"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516928895"}]},"ts":"1689516928895"} 2023-07-16 14:15:28,900 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:28,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 
2023-07-16 14:15:28,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e3d99ab113cfa7396d5a9aa54612b984: 2023-07-16 14:15:28,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e3d99ab113cfa7396d5a9aa54612b984 move to jenkins-hbase4.apache.org,41933,1689516920766 record at close sequenceid=2 2023-07-16 14:15:28,905 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:28,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:28,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 668f8446053954f71dc066588196c5a6, disabling compactions & flushes 2023-07-16 14:15:28,906 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:28,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:28,906 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. after waiting 0 ms 2023-07-16 14:15:28,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 
2023-07-16 14:15:28,908 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=CLOSED 2023-07-16 14:15:28,909 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516928908"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516928908"}]},"ts":"1689516928908"} 2023-07-16 14:15:28,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=35 2023-07-16 14:15:28,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=35, state=SUCCESS; CloseRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,44287,1689516924704 in 245 msec 2023-07-16 14:15:28,912 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:28,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:28,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 
2023-07-16 14:15:28,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 668f8446053954f71dc066588196c5a6: 2023-07-16 14:15:28,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 668f8446053954f71dc066588196c5a6 move to jenkins-hbase4.apache.org,41933,1689516920766 record at close sequenceid=2 2023-07-16 14:15:28,917 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=34 2023-07-16 14:15:28,917 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=34, state=SUCCESS; CloseRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,43741,1689516920562 in 256 msec 2023-07-16 14:15:28,918 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:28,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:28,920 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=CLOSED 2023-07-16 14:15:28,920 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516928920"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516928920"}]},"ts":"1689516928920"} 2023-07-16 14:15:28,924 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=36 2023-07-16 14:15:28,925 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=36, state=SUCCESS; CloseRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,43741,1689516920562 in 260 msec 2023-07-16 14:15:28,926 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=36, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:29,027 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 14:15:29,027 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,027 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,028 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929027"}]},"ts":"1689516929027"} 2023-07-16 14:15:29,027 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,027 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,028 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929027"}]},"ts":"1689516929027"} 2023-07-16 14:15:29,028 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929027"}]},"ts":"1689516929027"} 2023-07-16 14:15:29,027 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,028 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929027"}]},"ts":"1689516929027"} 2023-07-16 14:15:29,028 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929027"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929027"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929027"}]},"ts":"1689516929027"} 2023-07-16 14:15:29,033 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=33, state=RUNNABLE; OpenRegionProcedure 
b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=35, state=RUNNABLE; OpenRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:29,036 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=36, state=RUNNABLE; OpenRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,038 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=32, state=RUNNABLE; OpenRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:29,039 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=34, state=RUNNABLE; OpenRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,100 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 14:15:29,188 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 14:15:29,189 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 14:15:29,189 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:29,190 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 14:15:29,190 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 14:15:29,190 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 14:15:29,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 
2023-07-16 14:15:29,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63823929c4c50daaf883cb008c86fd59, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 14:15:29,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:29,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,213 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,219 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,220 DEBUG [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/f 2023-07-16 14:15:29,220 DEBUG [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/f 2023-07-16 14:15:29,221 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63823929c4c50daaf883cb008c86fd59 columnFamilyName f 2023-07-16 14:15:29,222 INFO [StoreOpener-63823929c4c50daaf883cb008c86fd59-1] regionserver.HStore(310): Store=63823929c4c50daaf883cb008c86fd59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:29,222 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 
2023-07-16 14:15:29,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e3d99ab113cfa7396d5a9aa54612b984, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 14:15:29,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:29,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,224 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,226 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,226 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,227 DEBUG [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/f 2023-07-16 14:15:29,227 DEBUG [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/f 2023-07-16 14:15:29,228 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e3d99ab113cfa7396d5a9aa54612b984 columnFamilyName f 2023-07-16 14:15:29,228 INFO [StoreOpener-e3d99ab113cfa7396d5a9aa54612b984-1] regionserver.HStore(310): Store=e3d99ab113cfa7396d5a9aa54612b984/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:29,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 63823929c4c50daaf883cb008c86fd59; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10288096480, jitterRate=-0.04184634983539581}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:29,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 63823929c4c50daaf883cb008c86fd59: 2023-07-16 14:15:29,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59., pid=45, masterSystemTime=1689516929195 2023-07-16 14:15:29,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,240 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,241 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 
2023-07-16 14:15:29,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 674a175db354ee25768c8387797d8c2f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 14:15:29,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,241 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:29,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,241 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929241"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516929241"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516929241"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516929241"}]},"ts":"1689516929241"} 2023-07-16 14:15:29,242 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e3d99ab113cfa7396d5a9aa54612b984; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10644470240, jitterRate=-0.00865645706653595}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:29,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e3d99ab113cfa7396d5a9aa54612b984: 2023-07-16 14:15:29,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984., pid=46, masterSystemTime=1689516929195 2023-07-16 14:15:29,247 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=32 2023-07-16 14:15:29,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:29,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=32, state=SUCCESS; OpenRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,34921,1689516920700 in 206 msec 2023-07-16 14:15:29,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:29,248 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:29,248 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929248"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516929248"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516929248"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516929248"}]},"ts":"1689516929248"} 2023-07-16 14:15:29,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 668f8446053954f71dc066588196c5a6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 14:15:29,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:29,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,250 DEBUG [StoreOpener-674a175db354ee25768c8387797d8c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/f 2023-07-16 14:15:29,250 DEBUG [StoreOpener-674a175db354ee25768c8387797d8c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/f 2023-07-16 14:15:29,250 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 674a175db354ee25768c8387797d8c2f columnFamilyName f 2023-07-16 14:15:29,251 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, REOPEN/MOVE in 613 msec 2023-07-16 14:15:29,252 INFO [StoreOpener-674a175db354ee25768c8387797d8c2f-1] regionserver.HStore(310): Store=674a175db354ee25768c8387797d8c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:29,252 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,254 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=34 2023-07-16 14:15:29,255 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=34, state=SUCCESS; OpenRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,41933,1689516920766 in 212 msec 2023-07-16 14:15:29,255 DEBUG [StoreOpener-668f8446053954f71dc066588196c5a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/f 2023-07-16 14:15:29,255 DEBUG [StoreOpener-668f8446053954f71dc066588196c5a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/f 2023-07-16 14:15:29,256 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 668f8446053954f71dc066588196c5a6 columnFamilyName f 
2023-07-16 14:15:29,256 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,257 INFO [StoreOpener-668f8446053954f71dc066588196c5a6-1] regionserver.HStore(310): Store=668f8446053954f71dc066588196c5a6/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:29,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, REOPEN/MOVE in 610 msec 2023-07-16 14:15:29,258 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 668f8446053954f71dc066588196c5a6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11977990400, jitterRate=0.11553728580474854}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:29,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 668f8446053954f71dc066588196c5a6: 2023-07-16 14:15:29,266 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 674a175db354ee25768c8387797d8c2f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10457386080, jitterRate=-0.026080027222633362}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:29,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 674a175db354ee25768c8387797d8c2f: 2023-07-16 14:15:29,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f., pid=43, masterSystemTime=1689516929195 2023-07-16 14:15:29,267 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6., pid=44, masterSystemTime=1689516929195 2023-07-16 14:15:29,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:29,270 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:29,271 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,271 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929271"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516929271"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516929271"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516929271"}]},"ts":"1689516929271"} 2023-07-16 14:15:29,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:29,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:29,272 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:29,272 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=36 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b69b0f2b7ae79d3665e5b3ec10846151, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 14:15:29,272 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929272"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516929272"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516929272"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516929272"}]},"ts":"1689516929272"} 2023-07-16 14:15:29,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:29,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,273 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,275 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,277 DEBUG [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/f 2023-07-16 14:15:29,277 DEBUG [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/f 2023-07-16 14:15:29,278 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b69b0f2b7ae79d3665e5b3ec10846151 columnFamilyName f 2023-07-16 14:15:29,278 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=35 2023-07-16 14:15:29,278 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=35, state=SUCCESS; OpenRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,34921,1689516920700 in 239 msec 2023-07-16 14:15:29,279 INFO [StoreOpener-b69b0f2b7ae79d3665e5b3ec10846151-1] regionserver.HStore(310): Store=b69b0f2b7ae79d3665e5b3ec10846151/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:29,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=36 2023-07-16 14:15:29,279 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=36, state=SUCCESS; OpenRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,41933,1689516920766 in 239 msec 2023-07-16 14:15:29,282 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, REOPEN/MOVE in 631 msec 2023-07-16 14:15:29,282 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, REOPEN/MOVE in 628 msec 2023-07-16 14:15:29,286 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,294 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b69b0f2b7ae79d3665e5b3ec10846151; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11142034400, jitterRate=0.03768281638622284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:29,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b69b0f2b7ae79d3665e5b3ec10846151: 2023-07-16 14:15:29,295 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151., pid=42, masterSystemTime=1689516929195 2023-07-16 14:15:29,297 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:29,297 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:29,298 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,299 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929298"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516929298"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516929298"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516929298"}]},"ts":"1689516929298"} 2023-07-16 14:15:29,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=33 2023-07-16 14:15:29,304 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=33, state=SUCCESS; OpenRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,41933,1689516920766 in 268 msec 2023-07-16 14:15:29,307 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, REOPEN/MOVE in 666 msec 2023-07-16 14:15:29,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=32 2023-07-16 14:15:29,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_248150470. 
2023-07-16 14:15:29,654 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:29,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:29,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:29,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:29,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:29,667 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:29,677 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:29,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:29,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=47, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:29,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 14:15:29,707 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516929707"}]},"ts":"1689516929707"} 2023-07-16 14:15:29,709 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 14:15:29,711 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 14:15:29,716 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, UNASSIGN}, {pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, UNASSIGN}, {pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, UNASSIGN}, {pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, UNASSIGN}, {pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, UNASSIGN}] 2023-07-16 14:15:29,721 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, UNASSIGN 2023-07-16 14:15:29,721 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, UNASSIGN 2023-07-16 14:15:29,722 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, UNASSIGN 2023-07-16 14:15:29,722 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, UNASSIGN 2023-07-16 14:15:29,722 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=47, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, UNASSIGN 2023-07-16 14:15:29,723 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,724 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,724 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929723"}]},"ts":"1689516929723"} 2023-07-16 14:15:29,724 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929723"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929723"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929723"}]},"ts":"1689516929723"} 2023-07-16 14:15:29,725 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:29,725 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929725"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929725"}]},"ts":"1689516929725"} 2023-07-16 14:15:29,726 INFO 
[PEWorker-3] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,726 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929726"}]},"ts":"1689516929726"} 2023-07-16 14:15:29,726 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:29,726 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516929726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516929726"}]},"ts":"1689516929726"} 2023-07-16 14:15:29,726 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=51, state=RUNNABLE; CloseRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:29,728 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=49, state=RUNNABLE; CloseRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,738 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=48, state=RUNNABLE; CloseRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:29,745 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=52, state=RUNNABLE; CloseRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,746 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=50, state=RUNNABLE; CloseRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:29,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 14:15:29,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 674a175db354ee25768c8387797d8c2f, disabling compactions & flushes 2023-07-16 14:15:29,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:29,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 
2023-07-16 14:15:29,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. after waiting 0 ms 2023-07-16 14:15:29,882 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:29,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e3d99ab113cfa7396d5a9aa54612b984, disabling compactions & flushes 2023-07-16 14:15:29,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:29,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:29,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. after waiting 0 ms 2023-07-16 14:15:29,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 2023-07-16 14:15:29,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:29,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:29,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f. 2023-07-16 14:15:29,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 674a175db354ee25768c8387797d8c2f: 2023-07-16 14:15:29,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984. 
2023-07-16 14:15:29,894 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e3d99ab113cfa7396d5a9aa54612b984: 2023-07-16 14:15:29,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:29,895 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 63823929c4c50daaf883cb008c86fd59, disabling compactions & flushes 2023-07-16 14:15:29,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. after waiting 0 ms 2023-07-16 14:15:29,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,900 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=674a175db354ee25768c8387797d8c2f, regionState=CLOSED 2023-07-16 14:15:29,900 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929900"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516929900"}]},"ts":"1689516929900"} 2023-07-16 14:15:29,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:29,901 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b69b0f2b7ae79d3665e5b3ec10846151, disabling compactions & flushes 2023-07-16 14:15:29,902 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:29,902 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 2023-07-16 14:15:29,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. after waiting 0 ms 2023-07-16 14:15:29,903 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:29,903 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=e3d99ab113cfa7396d5a9aa54612b984, regionState=CLOSED 2023-07-16 14:15:29,904 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516929903"}]},"ts":"1689516929903"} 2023-07-16 14:15:29,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:29,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59. 2023-07-16 14:15:29,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 63823929c4c50daaf883cb008c86fd59: 2023-07-16 14:15:29,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=51 2023-07-16 14:15:29,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=51, state=SUCCESS; CloseRegionProcedure 674a175db354ee25768c8387797d8c2f, server=jenkins-hbase4.apache.org,34921,1689516920700 in 179 msec 2023-07-16 14:15:29,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:29,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151. 
2023-07-16 14:15:29,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b69b0f2b7ae79d3665e5b3ec10846151: 2023-07-16 14:15:29,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=50 2023-07-16 14:15:29,923 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=50, state=SUCCESS; CloseRegionProcedure e3d99ab113cfa7396d5a9aa54612b984, server=jenkins-hbase4.apache.org,41933,1689516920766 in 161 msec 2023-07-16 14:15:29,924 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:29,925 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=674a175db354ee25768c8387797d8c2f, UNASSIGN in 198 msec 2023-07-16 14:15:29,925 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=63823929c4c50daaf883cb008c86fd59, regionState=CLOSED 2023-07-16 14:15:29,925 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929925"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516929925"}]},"ts":"1689516929925"} 2023-07-16 14:15:29,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:29,925 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 668f8446053954f71dc066588196c5a6, disabling compactions & flushes 2023-07-16 14:15:29,927 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:29,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 2023-07-16 14:15:29,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. after waiting 0 ms 2023-07-16 14:15:29,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 
2023-07-16 14:15:29,928 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=e3d99ab113cfa7396d5a9aa54612b984, UNASSIGN in 210 msec 2023-07-16 14:15:29,928 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=b69b0f2b7ae79d3665e5b3ec10846151, regionState=CLOSED 2023-07-16 14:15:29,928 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516929928"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516929928"}]},"ts":"1689516929928"} 2023-07-16 14:15:29,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=48 2023-07-16 14:15:29,936 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=48, state=SUCCESS; CloseRegionProcedure 63823929c4c50daaf883cb008c86fd59, server=jenkins-hbase4.apache.org,34921,1689516920700 in 192 msec 2023-07-16 14:15:29,936 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=49 2023-07-16 14:15:29,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; CloseRegionProcedure b69b0f2b7ae79d3665e5b3ec10846151, server=jenkins-hbase4.apache.org,41933,1689516920766 in 203 msec 2023-07-16 14:15:29,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:29,939 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63823929c4c50daaf883cb008c86fd59, UNASSIGN in 223 msec 2023-07-16 14:15:29,939 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b69b0f2b7ae79d3665e5b3ec10846151, UNASSIGN in 224 msec 2023-07-16 14:15:29,940 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6. 
2023-07-16 14:15:29,940 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 668f8446053954f71dc066588196c5a6: 2023-07-16 14:15:29,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 668f8446053954f71dc066588196c5a6 2023-07-16 14:15:29,943 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=668f8446053954f71dc066588196c5a6, regionState=CLOSED 2023-07-16 14:15:29,944 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516929943"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516929943"}]},"ts":"1689516929943"} 2023-07-16 14:15:29,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=52 2023-07-16 14:15:29,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; CloseRegionProcedure 668f8446053954f71dc066588196c5a6, server=jenkins-hbase4.apache.org,41933,1689516920766 in 201 msec 2023-07-16 14:15:29,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=47 2023-07-16 14:15:29,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=47, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=668f8446053954f71dc066588196c5a6, UNASSIGN in 232 msec 2023-07-16 14:15:29,952 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516929952"}]},"ts":"1689516929952"} 2023-07-16 14:15:29,954 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 14:15:29,956 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 14:15:29,959 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 271 msec 2023-07-16 14:15:30,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=47 2023-07-16 14:15:30,005 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 47 completed 2023-07-16 14:15:30,007 INFO [Listener at localhost/36419] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:30,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:30,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=58, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-16 14:15:30,030 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-16 14:15:30,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 14:15:30,059 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:30,059 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:30,059 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:30,059 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:30,059 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:30,064 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits] 2023-07-16 14:15:30,064 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits] 2023-07-16 14:15:30,064 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits] 2023-07-16 14:15:30,064 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits] 2023-07-16 14:15:30,065 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/f, FileablePath, 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits] 2023-07-16 14:15:30,084 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984/recovered.edits/7.seqid 2023-07-16 14:15:30,085 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/e3d99ab113cfa7396d5a9aa54612b984 2023-07-16 14:15:30,087 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6/recovered.edits/7.seqid 2023-07-16 14:15:30,088 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/668f8446053954f71dc066588196c5a6 2023-07-16 14:15:30,090 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f/recovered.edits/7.seqid 2023-07-16 14:15:30,091 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59/recovered.edits/7.seqid 2023-07-16 14:15:30,092 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/674a175db354ee25768c8387797d8c2f 2023-07-16 14:15:30,092 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63823929c4c50daaf883cb008c86fd59 2023-07-16 14:15:30,098 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits/7.seqid to 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151/recovered.edits/7.seqid 2023-07-16 14:15:30,099 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b69b0f2b7ae79d3665e5b3ec10846151 2023-07-16 14:15:30,099 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 14:15:30,138 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 14:15:30,141 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 14:15:30,142 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-16 14:15:30,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516930142"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516930142"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516930142"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516930142"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516930142"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,146 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 14:15:30,147 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 63823929c4c50daaf883cb008c86fd59, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516927266.63823929c4c50daaf883cb008c86fd59.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => b69b0f2b7ae79d3665e5b3ec10846151, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689516927266.b69b0f2b7ae79d3665e5b3ec10846151.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => e3d99ab113cfa7396d5a9aa54612b984, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516927266.e3d99ab113cfa7396d5a9aa54612b984.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 674a175db354ee25768c8387797d8c2f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516927266.674a175db354ee25768c8387797d8c2f.', STARTKEY => 
'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 668f8446053954f71dc066588196c5a6, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516927266.668f8446053954f71dc066588196c5a6.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 14:15:30,147 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-16 14:15:30,147 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516930147"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:30,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 14:15:30,158 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 14:15:30,170 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,170 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,170 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,170 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,170 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,171 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 empty. 2023-07-16 14:15:30,171 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 empty. 2023-07-16 14:15:30,172 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa empty. 2023-07-16 14:15:30,171 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 empty. 2023-07-16 14:15:30,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea empty. 
2023-07-16 14:15:30,173 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,173 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,173 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,173 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,174 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,174 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 14:15:30,200 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:30,202 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 4126ac014708caf85adc6612abd47a03, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:30,203 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f606805db73ce93b97e85f616fe8aa83, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:30,203 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => b86011524f8b82dc678471d8b7a561fa, NAME => 
'Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f606805db73ce93b97e85f616fe8aa83, disabling compactions & flushes 2023-07-16 14:15:30,248 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. after waiting 0 ms 2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,248 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 
2023-07-16 14:15:30,248 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f606805db73ce93b97e85f616fe8aa83: 2023-07-16 14:15:30,249 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8f63c6266ae7686a9fed10d20a11faea, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:30,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,251 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 4126ac014708caf85adc6612abd47a03, disabling compactions & flushes 2023-07-16 14:15:30,252 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. after waiting 0 ms 2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:30,252 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 
2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 4126ac014708caf85adc6612abd47a03: 2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,252 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing b86011524f8b82dc678471d8b7a561fa, disabling compactions & flushes 2023-07-16 14:15:30,252 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 27f6cefd355bbb2ff32b1fa098a07eb0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:30,252 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:30,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:30,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. after waiting 0 ms 2023-07-16 14:15:30,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:30,253 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 
2023-07-16 14:15:30,253 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for b86011524f8b82dc678471d8b7a561fa: 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8f63c6266ae7686a9fed10d20a11faea, disabling compactions & flushes 2023-07-16 14:15:30,311 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. after waiting 0 ms 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,311 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,311 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8f63c6266ae7686a9fed10d20a11faea: 2023-07-16 14:15:30,330 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,330 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 27f6cefd355bbb2ff32b1fa098a07eb0, disabling compactions & flushes 2023-07-16 14:15:30,330 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 
after waiting 0 ms 2023-07-16 14:15:30,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,331 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,331 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 27f6cefd355bbb2ff32b1fa098a07eb0: 2023-07-16 14:15:30,338 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930338"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516930338"}]},"ts":"1689516930338"} 2023-07-16 14:15:30,339 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930338"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516930338"}]},"ts":"1689516930338"} 2023-07-16 14:15:30,339 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930338"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516930338"}]},"ts":"1689516930338"} 2023-07-16 14:15:30,339 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930338"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516930338"}]},"ts":"1689516930338"} 2023-07-16 14:15:30,339 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930338"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516930338"}]},"ts":"1689516930338"} 2023-07-16 14:15:30,346 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 14:15:30,348 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516930348"}]},"ts":"1689516930348"} 2023-07-16 14:15:30,350 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 14:15:30,351 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 14:15:30,356 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:30,356 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:30,356 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:30,356 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:30,357 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, ASSIGN}, {pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, ASSIGN}, {pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, ASSIGN}, {pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, ASSIGN}, {pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, ASSIGN}] 2023-07-16 14:15:30,360 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, ASSIGN 2023-07-16 14:15:30,360 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, ASSIGN 2023-07-16 14:15:30,361 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:30,361 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, ASSIGN 2023-07-16 14:15:30,361 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took 
xlock for pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, ASSIGN 2023-07-16 14:15:30,362 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:30,363 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=63, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:30,363 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=61, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:30,364 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, ASSIGN 2023-07-16 14:15:30,365 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=62, ppid=58, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:30,511 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 14:15:30,515 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=27f6cefd355bbb2ff32b1fa098a07eb0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,516 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930515"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516930515"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516930515"}]},"ts":"1689516930515"} 2023-07-16 14:15:30,516 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=b86011524f8b82dc678471d8b7a561fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,516 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930516"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516930516"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516930516"}]},"ts":"1689516930516"} 2023-07-16 14:15:30,517 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f606805db73ce93b97e85f616fe8aa83, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,517 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=8f63c6266ae7686a9fed10d20a11faea, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:30,517 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930517"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516930517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516930517"}]},"ts":"1689516930517"} 2023-07-16 14:15:30,517 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=4126ac014708caf85adc6612abd47a03, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:30,518 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930517"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516930517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516930517"}]},"ts":"1689516930517"} 2023-07-16 14:15:30,517 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930517"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516930517"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516930517"}]},"ts":"1689516930517"} 2023-07-16 14:15:30,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE; OpenRegionProcedure 
27f6cefd355bbb2ff32b1fa098a07eb0, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:30,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=60, state=RUNNABLE; OpenRegionProcedure b86011524f8b82dc678471d8b7a561fa, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:30,522 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=62, state=RUNNABLE; OpenRegionProcedure 8f63c6266ae7686a9fed10d20a11faea, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:30,526 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=61, state=RUNNABLE; OpenRegionProcedure 4126ac014708caf85adc6612abd47a03, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:30,531 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=59, state=RUNNABLE; OpenRegionProcedure f606805db73ce93b97e85f616fe8aa83, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:30,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 14:15:30,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f606805db73ce93b97e85f616fe8aa83, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8f63c6266ae7686a9fed10d20a11faea, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking 
encryption for f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,691 INFO [StoreOpener-8f63c6266ae7686a9fed10d20a11faea-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,692 INFO [StoreOpener-f606805db73ce93b97e85f616fe8aa83-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,694 DEBUG [StoreOpener-8f63c6266ae7686a9fed10d20a11faea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/f 2023-07-16 14:15:30,694 DEBUG [StoreOpener-8f63c6266ae7686a9fed10d20a11faea-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/f 2023-07-16 14:15:30,694 INFO [StoreOpener-8f63c6266ae7686a9fed10d20a11faea-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8f63c6266ae7686a9fed10d20a11faea columnFamilyName f 2023-07-16 14:15:30,696 INFO [StoreOpener-8f63c6266ae7686a9fed10d20a11faea-1] regionserver.HStore(310): Store=8f63c6266ae7686a9fed10d20a11faea/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:30,697 DEBUG [StoreOpener-f606805db73ce93b97e85f616fe8aa83-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/f 2023-07-16 14:15:30,697 DEBUG [StoreOpener-f606805db73ce93b97e85f616fe8aa83-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/f 2023-07-16 
14:15:30,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,697 INFO [StoreOpener-f606805db73ce93b97e85f616fe8aa83-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f606805db73ce93b97e85f616fe8aa83 columnFamilyName f 2023-07-16 14:15:30,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,698 INFO [StoreOpener-f606805db73ce93b97e85f616fe8aa83-1] regionserver.HStore(310): Store=f606805db73ce93b97e85f616fe8aa83/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:30,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,701 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:30,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:30,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:30,706 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8f63c6266ae7686a9fed10d20a11faea; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11524654720, jitterRate=0.07331711053848267}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:30,706 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8f63c6266ae7686a9fed10d20a11faea: 2023-07-16 14:15:30,707 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea., pid=66, masterSystemTime=1689516930683 2023-07-16 14:15:30,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:30,708 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f606805db73ce93b97e85f616fe8aa83; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9608977280, jitterRate=-0.10509425401687622}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:30,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f606805db73ce93b97e85f616fe8aa83: 2023-07-16 14:15:30,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:30,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 
2023-07-16 14:15:30,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4126ac014708caf85adc6612abd47a03, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 14:15:30,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,710 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83., pid=68, masterSystemTime=1689516930681 2023-07-16 14:15:30,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,710 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=8f63c6266ae7686a9fed10d20a11faea, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:30,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,710 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930710"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516930710"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516930710"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516930710"}]},"ts":"1689516930710"} 2023-07-16 14:15:30,712 INFO [StoreOpener-4126ac014708caf85adc6612abd47a03-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:30,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 
2023-07-16 14:15:30,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 27f6cefd355bbb2ff32b1fa098a07eb0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 14:15:30,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,714 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=f606805db73ce93b97e85f616fe8aa83, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,714 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930714"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516930714"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516930714"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516930714"}]},"ts":"1689516930714"} 2023-07-16 14:15:30,714 DEBUG [StoreOpener-4126ac014708caf85adc6612abd47a03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/f 2023-07-16 14:15:30,714 DEBUG [StoreOpener-4126ac014708caf85adc6612abd47a03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/f 2023-07-16 14:15:30,715 INFO [StoreOpener-4126ac014708caf85adc6612abd47a03-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4126ac014708caf85adc6612abd47a03 columnFamilyName f 2023-07-16 14:15:30,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=62 2023-07-16 14:15:30,716 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=62, state=SUCCESS; OpenRegionProcedure 8f63c6266ae7686a9fed10d20a11faea, server=jenkins-hbase4.apache.org,34921,1689516920700 in 191 msec 2023-07-16 14:15:30,716 INFO [StoreOpener-4126ac014708caf85adc6612abd47a03-1] regionserver.HStore(310): Store=4126ac014708caf85adc6612abd47a03/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:30,717 INFO [StoreOpener-27f6cefd355bbb2ff32b1fa098a07eb0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, ASSIGN in 359 msec 2023-07-16 14:15:30,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=59 2023-07-16 14:15:30,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=59, state=SUCCESS; OpenRegionProcedure f606805db73ce93b97e85f616fe8aa83, server=jenkins-hbase4.apache.org,41933,1689516920766 in 185 msec 2023-07-16 14:15:30,721 DEBUG [StoreOpener-27f6cefd355bbb2ff32b1fa098a07eb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/f 2023-07-16 14:15:30,721 DEBUG [StoreOpener-27f6cefd355bbb2ff32b1fa098a07eb0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/f 2023-07-16 14:15:30,721 INFO [StoreOpener-27f6cefd355bbb2ff32b1fa098a07eb0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 27f6cefd355bbb2ff32b1fa098a07eb0 columnFamilyName f 2023-07-16 14:15:30,722 INFO [StoreOpener-27f6cefd355bbb2ff32b1fa098a07eb0-1] 
regionserver.HStore(310): Store=27f6cefd355bbb2ff32b1fa098a07eb0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:30,722 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, ASSIGN in 363 msec 2023-07-16 14:15:30,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:30,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:30,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:30,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4126ac014708caf85adc6612abd47a03; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11463518400, jitterRate=0.06762334704399109}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:30,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4126ac014708caf85adc6612abd47a03: 2023-07-16 14:15:30,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03., pid=67, masterSystemTime=1689516930683 2023-07-16 14:15:30,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:30,732 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 27f6cefd355bbb2ff32b1fa098a07eb0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10119030720, jitterRate=-0.05759182572364807}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:30,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 
2023-07-16 14:15:30,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 27f6cefd355bbb2ff32b1fa098a07eb0: 2023-07-16 14:15:30,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:30,733 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=4126ac014708caf85adc6612abd47a03, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:30,733 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930733"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516930733"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516930733"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516930733"}]},"ts":"1689516930733"} 2023-07-16 14:15:30,734 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0., pid=64, masterSystemTime=1689516930681 2023-07-16 14:15:30,736 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:30,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 
2023-07-16 14:15:30,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b86011524f8b82dc678471d8b7a561fa, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 14:15:30,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:30,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,739 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=27f6cefd355bbb2ff32b1fa098a07eb0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,739 INFO [StoreOpener-b86011524f8b82dc678471d8b7a561fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,739 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516930739"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516930739"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516930739"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516930739"}]},"ts":"1689516930739"} 2023-07-16 14:15:30,749 DEBUG [StoreOpener-b86011524f8b82dc678471d8b7a561fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/f 2023-07-16 14:15:30,749 DEBUG [StoreOpener-b86011524f8b82dc678471d8b7a561fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/f 2023-07-16 14:15:30,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=61 2023-07-16 14:15:30,750 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=61, state=SUCCESS; OpenRegionProcedure 4126ac014708caf85adc6612abd47a03, server=jenkins-hbase4.apache.org,34921,1689516920700 in 213 msec 2023-07-16 14:15:30,751 INFO [StoreOpener-b86011524f8b82dc678471d8b7a561fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b86011524f8b82dc678471d8b7a561fa columnFamilyName f 2023-07-16 14:15:30,752 INFO [StoreOpener-b86011524f8b82dc678471d8b7a561fa-1] regionserver.HStore(310): Store=b86011524f8b82dc678471d8b7a561fa/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:30,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=63 2023-07-16 14:15:30,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, ASSIGN in 393 msec 2023-07-16 14:15:30,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; OpenRegionProcedure 27f6cefd355bbb2ff32b1fa098a07eb0, server=jenkins-hbase4.apache.org,41933,1689516920766 in 224 msec 2023-07-16 14:15:30,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,754 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, ASSIGN in 396 msec 2023-07-16 14:15:30,757 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:30,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:30,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b86011524f8b82dc678471d8b7a561fa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9405337760, jitterRate=-0.12405966222286224}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:30,764 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b86011524f8b82dc678471d8b7a561fa: 2023-07-16 14:15:30,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa., pid=65, masterSystemTime=1689516930681 2023-07-16 14:15:30,767 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:30,768 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:30,769 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=b86011524f8b82dc678471d8b7a561fa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:30,769 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516930769"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516930769"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516930769"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516930769"}]},"ts":"1689516930769"} 2023-07-16 14:15:30,773 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=60 2023-07-16 14:15:30,774 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; OpenRegionProcedure b86011524f8b82dc678471d8b7a561fa, server=jenkins-hbase4.apache.org,41933,1689516920766 in 250 msec 2023-07-16 14:15:30,775 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=58 2023-07-16 14:15:30,776 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=58, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, ASSIGN in 417 msec 2023-07-16 14:15:30,776 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516930776"}]},"ts":"1689516930776"} 2023-07-16 14:15:30,777 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 14:15:30,779 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-16 14:15:30,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 762 msec 2023-07-16 14:15:31,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=58 2023-07-16 14:15:31,155 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 58 completed 2023-07-16 14:15:31,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,157 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:31,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:31,160 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=69, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 14:15:31,167 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516931167"}]},"ts":"1689516931167"} 2023-07-16 14:15:31,172 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 14:15:31,175 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 14:15:31,176 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, UNASSIGN}, {pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, UNASSIGN}, {pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, UNASSIGN}, {pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, UNASSIGN}, {pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, UNASSIGN}] 2023-07-16 14:15:31,179 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, UNASSIGN 2023-07-16 14:15:31,180 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=27f6cefd355bbb2ff32b1fa098a07eb0, UNASSIGN 2023-07-16 14:15:31,180 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, UNASSIGN 2023-07-16 14:15:31,180 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, UNASSIGN 2023-07-16 14:15:31,180 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, ppid=69, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, UNASSIGN 2023-07-16 14:15:31,181 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=4126ac014708caf85adc6612abd47a03, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:31,181 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=27f6cefd355bbb2ff32b1fa098a07eb0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:31,181 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516931181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516931181"}]},"ts":"1689516931181"} 2023-07-16 14:15:31,181 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516931181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516931181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516931181"}]},"ts":"1689516931181"} 2023-07-16 14:15:31,181 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=b86011524f8b82dc678471d8b7a561fa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:31,181 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f606805db73ce93b97e85f616fe8aa83, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:31,182 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516931181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516931181"}]},"ts":"1689516931181"} 2023-07-16 14:15:31,181 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=8f63c6266ae7686a9fed10d20a11faea, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:31,182 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516931181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516931181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516931181"}]},"ts":"1689516931181"} 2023-07-16 14:15:31,182 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931181"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516931181"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516931181"}]},"ts":"1689516931181"} 2023-07-16 14:15:31,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure 4126ac014708caf85adc6612abd47a03, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:31,185 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=74, state=RUNNABLE; CloseRegionProcedure 27f6cefd355bbb2ff32b1fa098a07eb0, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:31,185 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=71, state=RUNNABLE; CloseRegionProcedure b86011524f8b82dc678471d8b7a561fa, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:31,187 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=70, state=RUNNABLE; CloseRegionProcedure f606805db73ce93b97e85f616fe8aa83, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:31,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=73, state=RUNNABLE; CloseRegionProcedure 8f63c6266ae7686a9fed10d20a11faea, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:31,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 14:15:31,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:31,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4126ac014708caf85adc6612abd47a03, disabling compactions & flushes 2023-07-16 14:15:31,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:31,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:31,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. after waiting 0 ms 2023-07-16 14:15:31,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 
2023-07-16 14:15:31,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b86011524f8b82dc678471d8b7a561fa, disabling compactions & flushes 2023-07-16 14:15:31,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. after waiting 0 ms 2023-07-16 14:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 2023-07-16 14:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:31,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03. 2023-07-16 14:15:31,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4126ac014708caf85adc6612abd47a03: 2023-07-16 14:15:31,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa. 
2023-07-16 14:15:31,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b86011524f8b82dc678471d8b7a561fa: 2023-07-16 14:15:31,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:31,352 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:31,352 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=4126ac014708caf85adc6612abd47a03, regionState=CLOSED 2023-07-16 14:15:31,353 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931352"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516931352"}]},"ts":"1689516931352"} 2023-07-16 14:15:31,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:31,354 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:31,354 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=b86011524f8b82dc678471d8b7a561fa, regionState=CLOSED 2023-07-16 14:15:31,356 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931354"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516931354"}]},"ts":"1689516931354"} 2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8f63c6266ae7686a9fed10d20a11faea, disabling compactions & flushes 2023-07-16 14:15:31,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f606805db73ce93b97e85f616fe8aa83, disabling compactions & flushes 2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. after waiting 0 ms 2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:31,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 
2023-07-16 14:15:31,356 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:31,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. after waiting 0 ms 2023-07-16 14:15:31,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 2023-07-16 14:15:31,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=72 2023-07-16 14:15:31,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=72, state=SUCCESS; CloseRegionProcedure 4126ac014708caf85adc6612abd47a03, server=jenkins-hbase4.apache.org,34921,1689516920700 in 176 msec 2023-07-16 14:15:31,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=71 2023-07-16 14:15:31,366 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=71, state=SUCCESS; CloseRegionProcedure b86011524f8b82dc678471d8b7a561fa, server=jenkins-hbase4.apache.org,41933,1689516920766 in 176 msec 2023-07-16 14:15:31,367 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4126ac014708caf85adc6612abd47a03, UNASSIGN in 189 msec 2023-07-16 14:15:31,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b86011524f8b82dc678471d8b7a561fa, UNASSIGN in 190 msec 2023-07-16 14:15:31,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:31,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea. 2023-07-16 14:15:31,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8f63c6266ae7686a9fed10d20a11faea: 2023-07-16 14:15:31,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:31,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:31,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83. 
2023-07-16 14:15:31,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f606805db73ce93b97e85f616fe8aa83: 2023-07-16 14:15:31,379 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=8f63c6266ae7686a9fed10d20a11faea, regionState=CLOSED 2023-07-16 14:15:31,379 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689516931379"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516931379"}]},"ts":"1689516931379"} 2023-07-16 14:15:31,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:31,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:31,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 27f6cefd355bbb2ff32b1fa098a07eb0, disabling compactions & flushes 2023-07-16 14:15:31,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:31,387 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=f606805db73ce93b97e85f616fe8aa83, regionState=CLOSED 2023-07-16 14:15:31,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:31,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. after waiting 0 ms 2023-07-16 14:15:31,387 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516931387"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516931387"}]},"ts":"1689516931387"} 2023-07-16 14:15:31,387 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 
2023-07-16 14:15:31,391 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=73 2023-07-16 14:15:31,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=73, state=SUCCESS; CloseRegionProcedure 8f63c6266ae7686a9fed10d20a11faea, server=jenkins-hbase4.apache.org,34921,1689516920700 in 196 msec 2023-07-16 14:15:31,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=70 2023-07-16 14:15:31,395 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8f63c6266ae7686a9fed10d20a11faea, UNASSIGN in 215 msec 2023-07-16 14:15:31,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:31,395 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=70, state=SUCCESS; CloseRegionProcedure f606805db73ce93b97e85f616fe8aa83, server=jenkins-hbase4.apache.org,41933,1689516920766 in 202 msec 2023-07-16 14:15:31,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0. 2023-07-16 14:15:31,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 27f6cefd355bbb2ff32b1fa098a07eb0: 2023-07-16 14:15:31,397 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f606805db73ce93b97e85f616fe8aa83, UNASSIGN in 219 msec 2023-07-16 14:15:31,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:31,399 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=74 updating hbase:meta row=27f6cefd355bbb2ff32b1fa098a07eb0, regionState=CLOSED 2023-07-16 14:15:31,400 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689516931399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516931399"}]},"ts":"1689516931399"} 2023-07-16 14:15:31,412 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=74 2023-07-16 14:15:31,413 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=74, state=SUCCESS; CloseRegionProcedure 27f6cefd355bbb2ff32b1fa098a07eb0, server=jenkins-hbase4.apache.org,41933,1689516920766 in 222 msec 2023-07-16 14:15:31,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=69 2023-07-16 14:15:31,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=27f6cefd355bbb2ff32b1fa098a07eb0, UNASSIGN in 236 msec 2023-07-16 14:15:31,416 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516931416"}]},"ts":"1689516931416"} 2023-07-16 14:15:31,418 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 14:15:31,420 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 14:15:31,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=69, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 261 msec 2023-07-16 14:15:31,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=69 2023-07-16 14:15:31,469 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 69 completed 2023-07-16 14:15:31,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,490 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=80, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_248150470' 2023-07-16 14:15:31,491 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=80, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,502 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:31,502 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:31,502 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:31,502 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 
14:15:31,502 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:31,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:31,511 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/recovered.edits] 2023-07-16 14:15:31,511 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/recovered.edits] 2023-07-16 14:15:31,512 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/recovered.edits] 2023-07-16 14:15:31,512 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/recovered.edits] 2023-07-16 14:15:31,513 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/recovered.edits] 2023-07-16 14:15:31,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-16 14:15:31,525 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/recovered.edits/4.seqid to 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea/recovered.edits/4.seqid 2023-07-16 14:15:31,525 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83/recovered.edits/4.seqid 2023-07-16 14:15:31,526 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0/recovered.edits/4.seqid 2023-07-16 14:15:31,526 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03/recovered.edits/4.seqid 2023-07-16 14:15:31,527 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8f63c6266ae7686a9fed10d20a11faea 2023-07-16 14:15:31,527 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/27f6cefd355bbb2ff32b1fa098a07eb0 2023-07-16 14:15:31,528 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f606805db73ce93b97e85f616fe8aa83 2023-07-16 14:15:31,528 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4126ac014708caf85adc6612abd47a03 2023-07-16 14:15:31,528 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa/recovered.edits/4.seqid 2023-07-16 14:15:31,529 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b86011524f8b82dc678471d8b7a561fa 2023-07-16 14:15:31,529 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived 
Group_testTableMoveTruncateAndDrop regions 2023-07-16 14:15:31,534 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=80, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,541 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 14:15:31,545 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=80, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516931547"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516931547"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516931547"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516931547"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,547 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516931547"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,550 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 14:15:31,550 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f606805db73ce93b97e85f616fe8aa83, NAME => 'Group_testTableMoveTruncateAndDrop,,1689516930103.f606805db73ce93b97e85f616fe8aa83.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => b86011524f8b82dc678471d8b7a561fa, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689516930103.b86011524f8b82dc678471d8b7a561fa.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 4126ac014708caf85adc6612abd47a03, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689516930103.4126ac014708caf85adc6612abd47a03.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 8f63c6266ae7686a9fed10d20a11faea, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689516930103.8f63c6266ae7686a9fed10d20a11faea.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 
27f6cefd355bbb2ff32b1fa098a07eb0, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689516930103.27f6cefd355bbb2ff32b1fa098a07eb0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 14:15:31,550 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-16 14:15:31,551 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516931551"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:31,553 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 14:15:31,555 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=80, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 14:15:31,557 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 76 msec 2023-07-16 14:15:31,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=80 2023-07-16 14:15:31,618 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 80 completed 2023-07-16 14:15:31,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,619 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:31,623 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41933] ipc.CallRunner(144): callId: 167 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:49896 deadline: 1689516991623, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=43741 startCode=1689516920562. As of locationSeqNum=6. 2023-07-16 14:15:31,731 DEBUG [hconnection-0x5d23f00a-shared-pool-10] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:31,733 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47228, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:31,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:31,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:31,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup default 2023-07-16 14:15:31,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:31,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_248150470, current retry=0 2023-07-16 14:15:31,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_248150470 => default 2023-07-16 14:15:31,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:31,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_248150470 2023-07-16 14:15:31,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:31,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:31,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:31,775 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:31,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:31,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:31,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:31,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:31,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:31,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:31,791 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:31,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:31,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:31,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:31,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:31,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518131807, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:31,809 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:31,811 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:31,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,813 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:31,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:31,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:31,854 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=510 (was 424) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:52800 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:49656 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:42609 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:44287Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp808657323-639 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63627@0x02024450 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:52764 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:49638 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:42609 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44287 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:49620 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63627@0x02024450-SendThread(127.0.0.1:63627) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp808657323-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-982799967_17 at /127.0.0.1:52890 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1026350649_17 at /127.0.0.1:52834 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-640-acceptor-0@466459d7-ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:45101} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:38750 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:38726 [Receiving block BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1865733486_17 at /127.0.0.1:57316 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1-prefix:jenkins-hbase4.apache.org,44287,1689516924704 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-497c82a-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63627@0x02024450-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1-prefix:jenkins-hbase4.apache.org,44287,1689516924704.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-90143098-172.31.14.131-1689516914396:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44287-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp808657323-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=824 (was 682) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 459) - SystemLoadAverage LEAK? -, ProcessCount=176 (was 176), AvailableMemoryMB=2800 (was 3195) 2023-07-16 14:15:31,855 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 14:15:31,880 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=510, OpenFileDescriptor=824, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=176, AvailableMemoryMB=2800 2023-07-16 14:15:31,880 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 14:15:31,881 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-16 14:15:31,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:31,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
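Note on the records just above: the sequence "list rsgroup", "move tables [] to rsgroup default", "move servers [] to rsgroup default", "remove rsgroup master" is the test harness (TestRSGroupsBase) resetting group state between methods. The sketch below shows the general shape of that reset against an RSGroupAdminClient-style API. The method names follow the calls visible in the stack traces further down (moveServers, moveTables, removeRSGroup); listRSGroups and the exact signatures are assumptions for illustration, not a verified client contract.

    // Rough shape of the per-method reset the harness performs above:
    // move everything back to the default group, then drop extra groups.
    // Treat the exact RSGroupAdminClient signatures as assumptions.
    import java.io.IOException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    final class GroupStateReset {
        static void resetToDefault(RSGroupAdminClient admin) throws IOException {
            for (RSGroupInfo group : admin.listRSGroups()) {
                if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
                    continue;
                }
                // Empty sets are tolerated: the server logs
                // "moveTables() passed an empty set. Ignoring." and carries on.
                admin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
                admin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
                admin.removeRSGroup(group.getName());
            }
        }
    }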
2023-07-16 14:15:31,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:31,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:31,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:31,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:31,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:31,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:31,906 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:31,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:31,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:31,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:31,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:31,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:31,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518131924, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:31,925 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:31,927 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:31,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,933 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:31,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:31,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:31,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-16 14:15:31,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at 
org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:31,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:59606 deadline: 1689518131936, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 14:15:31,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-16 14:15:31,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:31,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:59606 deadline: 1689518131940, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 14:15:31,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-16 14:15:31,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:31,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:59606 deadline: 1689518131942, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 14:15:31,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-16 14:15:31,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-16 14:15:31,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:31,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:31,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:31,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:31,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:31,965 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
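Aside on the validation behaviour visible above: add-rsgroup calls for "foo*", "foo@" and "-" are all rejected with ConstraintException ("RSGroup name should only contain alphanumeric characters"), while "foo_123" is accepted, so the effective rule is letters, digits and underscore. The snippet below only restates that observed rule; it is not the actual RSGroupInfoManagerImpl.checkGroupName source, and the exception type is swapped for a plain IllegalArgumentException to keep it self-contained.

    // Illustrative re-statement of the group-name rule observed in this log:
    // letters, digits and underscore pass, anything else is rejected.
    import java.util.regex.Pattern;

    public final class GroupNameRule {
        // "foo_123" passes in the log, so '_' must be allowed despite the
        // "alphanumeric characters" wording of the exception message.
        private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

        static void checkGroupName(String name) {
            if (name == null || !VALID.matcher(name).matches()) {
                throw new IllegalArgumentException(
                    "RSGroup name should only contain alphanumeric characters: " + name);
            }
        }

        public static void main(String[] args) {
            for (String candidate : new String[] {"foo*", "foo@", "-", "foo_123"}) {
                try {
                    checkGroupName(candidate);
                    System.out.println(candidate + " -> accepted");
                } catch (IllegalArgumentException e) {
                    System.out.println(candidate + " -> rejected: " + e.getMessage());
                }
            }
        }
    }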
2023-07-16 14:15:31,965 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:31,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:31,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:31,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-16 14:15:31,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:31,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:31,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:31,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:31,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
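The "Updating znode: /hbase/rsgroup/<group>" and "Writing ZK GroupInfo count: N" lines above show each add/remove rewriting the group definitions kept as child znodes of /hbase/rsgroup. One way to observe the same state from outside the master is to list those children with a plain ZooKeeper client, as sketched below; the quorum string reuses the 127.0.0.1:63627 client port that appears earlier in this log and should be replaced with your own, and the whole snippet is an inspection aid rather than anything the test itself runs.

    // List the rsgroup znodes that the master is updating in the log above.
    import java.util.List;
    import org.apache.zookeeper.ZooKeeper;

    public final class ListRsGroupZnodes {
        public static void main(String[] args) throws Exception {
            // 127.0.0.1:63627 is the test cluster's ZK port in this log;
            // substitute your own quorum.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:63627", 30000, event -> { });
            try {
                List<String> groups = zk.getChildren("/hbase/rsgroup", false);
                groups.forEach(System.out::println); // e.g. default, master, foo_123
            } finally {
                zk.close();
            }
        }
    }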
2023-07-16 14:15:31,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:31,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:31,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:31,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:31,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:31,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:31,995 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:31,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:31,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:31,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:32,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:32,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:32,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:32,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:32,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518132007, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:32,007 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:32,009 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:32,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,010 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:32,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:32,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:32,029 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=512 (was 510) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=823 (was 824), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=479 (was 479), ProcessCount=176 (was 176), AvailableMemoryMB=2797 (was 2800) 2023-07-16 14:15:32,029 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 14:15:32,048 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512, OpenFileDescriptor=823, MaxFileDescriptor=60000, SystemLoadAverage=479, ProcessCount=176, AvailableMemoryMB=2796 2023-07-16 14:15:32,048 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 14:15:32,048 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-16 14:15:32,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:32,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
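The "before:"/"after:" summaries and the "Thread=512 is superior to 500" warnings around here come from HBase's ResourceChecker, which snapshots thread and file-descriptor counts on either side of each test and flags growth. A minimal stand-alone analogue of the thread part of that idea, written as a JUnit 4 rule to match the JUnit 4 runners in the stack traces, might look like the following; the threshold of 500 mirrors the warning in the log, and everything else is a simplified assumption rather than the org.apache.hadoop.hbase.ResourceChecker implementation. It would be attached with @Rule public ThreadCountRule threads = new ThreadCountRule(); in a test class.

    // Count live threads before and after each test, report the delta, and
    // warn when the absolute count crosses a threshold (500 in this log).
    import org.junit.rules.ExternalResource;

    public class ThreadCountRule extends ExternalResource {
        private static final int WARN_THRESHOLD = 500;
        private int before;

        @Override
        protected void before() {
            before = Thread.activeCount();
            System.out.println("before: Thread=" + before);
            warnIfHigh(before);
        }

        @Override
        protected void after() {
            int after = Thread.activeCount();
            System.out.println("after: Thread=" + after + " (was " + before + ")"
                + (after > before ? " - Thread LEAK? -" : ""));
            warnIfHigh(after);
        }

        private static void warnIfHigh(int count) {
            if (count > WARN_THRESHOLD) {
                System.out.println("WARN Thread=" + count + " is superior to " + WARN_THRESHOLD);
            }
        }
    }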
2023-07-16 14:15:32,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:32,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:32,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:32,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:32,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:32,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:32,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:32,072 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:32,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:32,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:32,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:32,079 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:32,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:32,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:32,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:32,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518132086, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:32,087 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:32,089 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:32,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,090 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:32,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:32,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:32,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:32,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:32,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
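Editor's note: the ConstraintException logged above ("Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist") comes from the test's group-cleanup path handing the master's own RPC address to moveServers, which the rsgroup admin rejects because it is not a live region server; the test then proceeds to "add rsgroup bar". Purely as an illustration of the client calls involved (not the test's literal code), a sketch against the hbase-rsgroup RSGroupAdminClient that these tests exercise might look like the following; the configuration, class name and host/port values are assumptions copied from or modelled on the log.

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class RsGroupSetupSketch {              // illustrative name, not from the test
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // assumed to point at the mini cluster
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Create the group the log adds next ("add rsgroup bar").
          rsGroupAdmin.addRSGroup("bar");
          // Passing an address that is not a live region server (here the master's
          // RPC port 41971, as in the log) is rejected with a ConstraintException.
          try {
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 41971)),
                "bar");
          } catch (ConstraintException e) {
            System.out.println("rejected as expected: " + e.getMessage());
          }
        }
      }
    }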
2023-07-16 14:15:32,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:32,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:32,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:32,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:32,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:32,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:32,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:32,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741] to rsgroup bar 2023-07-16 14:15:32,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:32,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:32,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:32,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:32,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(238): Moving server region bb99c7296a6419e19ffe990276a43f38, which do not belong to RSGroup bar 2023-07-16 14:15:32,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE 2023-07-16 14:15:32,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 14:15:32,112 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE 2023-07-16 14:15:32,112 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:32,113 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516932112"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516932112"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516932112"}]},"ts":"1689516932112"} 2023-07-16 14:15:32,115 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:32,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,269 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bb99c7296a6419e19ffe990276a43f38, disabling compactions & flushes 2023-07-16 14:15:32,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:32,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:32,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. after waiting 0 ms 2023-07-16 14:15:32,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:32,282 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-16 14:15:32,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
2023-07-16 14:15:32,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:32,284 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding bb99c7296a6419e19ffe990276a43f38 move to jenkins-hbase4.apache.org,44287,1689516924704 record at close sequenceid=10 2023-07-16 14:15:32,287 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,288 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=CLOSED 2023-07-16 14:15:32,288 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516932288"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516932288"}]},"ts":"1689516932288"} 2023-07-16 14:15:32,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-16 14:15:32,291 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,43741,1689516920562 in 175 msec 2023-07-16 14:15:32,292 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:32,443 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:32,443 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516932443"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516932443"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516932443"}]},"ts":"1689516932443"} 2023-07-16 14:15:32,445 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:32,602 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
2023-07-16 14:15:32,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bb99c7296a6419e19ffe990276a43f38, NAME => 'hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:32,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:32,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,603 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,604 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,605 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:32,605 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info 2023-07-16 14:15:32,606 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bb99c7296a6419e19ffe990276a43f38 columnFamilyName info 2023-07-16 14:15:32,615 DEBUG [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(539): loaded hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/info/31a880b9adb041f4a9f7745816e07fde 2023-07-16 14:15:32,615 INFO [StoreOpener-bb99c7296a6419e19ffe990276a43f38-1] regionserver.HStore(310): Store=bb99c7296a6419e19ffe990276a43f38/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:32,616 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,618 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,621 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:32,622 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bb99c7296a6419e19ffe990276a43f38; next sequenceid=13; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11620395360, jitterRate=0.08223365247249603}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:32,622 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:32,623 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38., pid=83, masterSystemTime=1689516932597 2023-07-16 14:15:32,625 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:32,625 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
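Editor's note: the entries above show three of the four region servers being moved into group "bar", and RSGroupAdminServer reacting by re-opening the hbase:namespace region on the server that stays in "default" (closed on ...,43741,..., opened on ...,44287,..., "Moving 1 region(s) to group default"). A hedged sketch of the client call that starts this sequence; the host/port pairs are taken from the log, while the class and method names are illustrative.

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveServersSketch {               // illustrative name
      // Move three region servers into "bar"; as the log shows, regions they host
      // that do not belong to "bar" (hbase:namespace here) are first re-assigned
      // to servers remaining in "default" before "Move servers done: default => bar".
      static void moveServersToBar(RSGroupAdminClient rsGroupAdmin) throws IOException {
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34921));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41933));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43741));
        rsGroupAdmin.moveServers(servers, "bar");
      }
    }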
2023-07-16 14:15:32,626 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=bb99c7296a6419e19ffe990276a43f38, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:32,626 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516932625"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516932625"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516932625"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516932625"}]},"ts":"1689516932625"} 2023-07-16 14:15:32,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-16 14:15:32,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure bb99c7296a6419e19ffe990276a43f38, server=jenkins-hbase4.apache.org,44287,1689516924704 in 182 msec 2023-07-16 14:15:32,631 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=bb99c7296a6419e19ffe990276a43f38, REOPEN/MOVE in 519 msec 2023-07-16 14:15:33,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-16 14:15:33,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766, jenkins-hbase4.apache.org,43741,1689516920562] are moved back to default 2023-07-16 14:15:33,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-16 14:15:33,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:33,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:33,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:33,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 14:15:33,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:33,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-16 14:15:33,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:33,124 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:33,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-16 14:15:33,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-16 14:15:33,126 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:33,127 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:33,127 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:33,127 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:33,130 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:33,131 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 empty. 
2023-07-16 14:15:33,132 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,132 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 14:15:33,148 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:33,149 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6acdd1ace1e7b71fa11bbe5869a0cce0, NAME => 'Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:33,163 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:33,164 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 6acdd1ace1e7b71fa11bbe5869a0cce0, disabling compactions & flushes 2023-07-16 14:15:33,164 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,164 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,164 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. after waiting 0 ms 2023-07-16 14:15:33,164 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,164 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:33,164 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:33,166 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:33,167 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516933167"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516933167"}]},"ts":"1689516933167"} 2023-07-16 14:15:33,169 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:33,169 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:33,170 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516933170"}]},"ts":"1689516933170"} 2023-07-16 14:15:33,171 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-16 14:15:33,178 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, ASSIGN}] 2023-07-16 14:15:33,180 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, ASSIGN 2023-07-16 14:15:33,181 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:33,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-16 14:15:33,333 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:33,333 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516933333"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516933333"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516933333"}]},"ts":"1689516933333"} 2023-07-16 14:15:33,335 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 
14:15:33,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-16 14:15:33,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6acdd1ace1e7b71fa11bbe5869a0cce0, NAME => 'Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:33,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:33,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,493 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,495 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:33,495 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:33,495 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6acdd1ace1e7b71fa11bbe5869a0cce0 columnFamilyName f 2023-07-16 14:15:33,496 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(310): Store=6acdd1ace1e7b71fa11bbe5869a0cce0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:33,497 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:33,503 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6acdd1ace1e7b71fa11bbe5869a0cce0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10202368960, jitterRate=-0.049830347299575806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:33,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:33,504 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0., pid=86, masterSystemTime=1689516933487 2023-07-16 14:15:33,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:33,506 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:33,506 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516933505"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516933505"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516933505"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516933505"}]},"ts":"1689516933505"} 2023-07-16 14:15:33,509 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-16 14:15:33,509 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704 in 172 msec 2023-07-16 14:15:33,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-16 14:15:33,511 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, ASSIGN in 331 msec 2023-07-16 14:15:33,511 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:33,512 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516933512"}]},"ts":"1689516933512"} 2023-07-16 14:15:33,513 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-16 14:15:33,515 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:33,517 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 394 msec 2023-07-16 14:15:33,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-16 14:15:33,730 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-16 14:15:33,730 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-16 14:15:33,730 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:33,735 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
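Editor's note: pid=84 above is a CreateTableProcedure for Group_testFailRemoveGroup with a single column family 'f', after which the listener waits until every region of the table is assigned. An approximate client-side equivalent, with the descriptor reduced to what the log prints (REGION_REPLICATION => '1', family 'f' with default attributes); the Admin and HBaseTestingUtility instances are assumed to exist, and the method name is illustrative.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateTableSketch {               // illustrative name
      // Create the table built by the procedure above, then block until its regions
      // are assigned, mirroring the "Waiting until all regions ... get assigned" entries.
      static void createAndWait(Admin admin, HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        admin.createTable(TableDescriptorBuilder.newBuilder(tn)
            .setRegionReplication(1)                                 // TABLE_ATTRIBUTES {REGION_REPLICATION => '1'}
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))  // NAME => 'f', defaults otherwise
            .build());
        util.waitUntilAllRegionsAssigned(tn);
      }
    }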
2023-07-16 14:15:33,736 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:33,736 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-16 14:15:33,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-16 14:15:33,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:33,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:33,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:33,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:33,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-16 14:15:33,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 6acdd1ace1e7b71fa11bbe5869a0cce0 to RSGroup bar 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 14:15:33,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:33,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE 2023-07-16 14:15:33,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-16 14:15:33,746 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE 2023-07-16 14:15:33,747 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:33,747 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516933747"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516933747"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516933747"}]},"ts":"1689516933747"} 2023-07-16 14:15:33,751 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:33,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6acdd1ace1e7b71fa11bbe5869a0cce0, disabling compactions & flushes 2023-07-16 14:15:33,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. after waiting 0 ms 2023-07-16 14:15:33,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:33,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:33,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:33,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:33,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6acdd1ace1e7b71fa11bbe5869a0cce0 move to jenkins-hbase4.apache.org,34921,1689516920700 record at close sequenceid=2 2023-07-16 14:15:33,916 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:33,916 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSED 2023-07-16 14:15:33,917 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516933916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516933916"}]},"ts":"1689516933916"} 2023-07-16 14:15:33,920 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-16 14:15:33,920 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704 in 169 msec 2023-07-16 14:15:33,921 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:34,071 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 14:15:34,072 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:34,072 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516934072"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516934072"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516934072"}]},"ts":"1689516934072"} 2023-07-16 14:15:34,074 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:34,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:34,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6acdd1ace1e7b71fa11bbe5869a0cce0, NAME => 'Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,234 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,235 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:34,235 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:34,235 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6acdd1ace1e7b71fa11bbe5869a0cce0 columnFamilyName f 2023-07-16 14:15:34,240 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(310): Store=6acdd1ace1e7b71fa11bbe5869a0cce0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:34,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,243 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6acdd1ace1e7b71fa11bbe5869a0cce0; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11118147520, jitterRate=0.03545817732810974}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:34,250 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:34,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0., pid=89, masterSystemTime=1689516934226 2023-07-16 14:15:34,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:34,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:34,254 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:34,254 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516934254"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516934254"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516934254"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516934254"}]},"ts":"1689516934254"} 2023-07-16 14:15:34,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-16 14:15:34,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,34921,1689516920700 in 181 msec 2023-07-16 14:15:34,262 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE in 515 msec 2023-07-16 14:15:34,346 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 14:15:34,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-16 14:15:34,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
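Editor's note: the entries above show [Group_testFailRemoveGroup] being moved into group "bar", which triggers another REOPEN/MOVE of region 6acdd1ace1e7b71fa11bbe5869a0cce0 onto a "bar" server (from ...,44287,... to ...,34921,...). The entries that follow show why the test is called testFailRemoveGroup: removing group "bar" is rejected because it still has a table, and pulling its servers back to "default" is rejected because the table would be left without servers. A hedged sketch of those calls (not the test's literal code; class and method names are illustrative, server addresses come from the log):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class FailRemoveGroupSketch {           // illustrative name
      static void run(RSGroupAdminClient rsGroupAdmin) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        // Move the table into "bar"; its region is re-opened on a "bar" server.
        rsGroupAdmin.moveTables(Collections.singleton(tn), "bar");
        // Both of the following are rejected with ConstraintException,
        // as the next log entries show.
        try {
          rsGroupAdmin.removeRSGroup("bar");             // group still hosts a table
        } catch (ConstraintException expected) { }
        Set<Address> servers = new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 34921),
            Address.fromParts("jenkins-hbase4.apache.org", 41933),
            Address.fromParts("jenkins-hbase4.apache.org", 43741)));
        try {
          rsGroupAdmin.moveServers(servers, "default");  // would leave the table without servers
        } catch (ConstraintException expected) { }
      }
    }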
2023-07-16 14:15:34,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:34,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:34,751 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:34,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 14:15:34,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:34,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 14:15:34,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:34,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:59606 deadline: 1689518134759, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-16 14:15:34,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741] to rsgroup default 2023-07-16 14:15:34,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:34,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:59606 deadline: 1689518134760, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-16 14:15:34,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-16 14:15:34,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:34,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:34,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:34,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:34,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-16 14:15:34,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 6acdd1ace1e7b71fa11bbe5869a0cce0 to RSGroup default 2023-07-16 14:15:34,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE 2023-07-16 14:15:34,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 14:15:34,775 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE 2023-07-16 14:15:34,777 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:34,777 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516934776"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516934776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516934776"}]},"ts":"1689516934776"} 2023-07-16 14:15:34,779 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:34,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6acdd1ace1e7b71fa11bbe5869a0cce0, disabling compactions & flushes 2023-07-16 14:15:34,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:34,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:34,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. after waiting 0 ms 2023-07-16 14:15:34,934 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:34,939 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:34,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:34,941 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:34,941 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6acdd1ace1e7b71fa11bbe5869a0cce0 move to jenkins-hbase4.apache.org,44287,1689516924704 record at close sequenceid=5 2023-07-16 14:15:34,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:34,944 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSED 2023-07-16 14:15:34,944 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516934944"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516934944"}]},"ts":"1689516934944"} 2023-07-16 14:15:34,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-16 14:15:34,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,34921,1689516920700 in 167 msec 2023-07-16 14:15:34,948 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:35,099 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:35,099 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516935099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516935099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516935099"}]},"ts":"1689516935099"} 2023-07-16 14:15:35,101 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:35,260 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:35,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6acdd1ace1e7b71fa11bbe5869a0cce0, NAME => 'Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:35,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,260 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:35,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,261 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,276 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,278 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:35,278 DEBUG [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f 2023-07-16 14:15:35,281 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6acdd1ace1e7b71fa11bbe5869a0cce0 columnFamilyName f 2023-07-16 14:15:35,282 INFO [StoreOpener-6acdd1ace1e7b71fa11bbe5869a0cce0-1] regionserver.HStore(310): Store=6acdd1ace1e7b71fa11bbe5869a0cce0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:35,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,285 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:35,292 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6acdd1ace1e7b71fa11bbe5869a0cce0; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9936267200, jitterRate=-0.07461300492286682}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:35,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:35,293 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0., pid=92, masterSystemTime=1689516935254 2023-07-16 14:15:35,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:35,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:35,297 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:35,297 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516935297"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516935297"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516935297"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516935297"}]},"ts":"1689516935297"} 2023-07-16 14:15:35,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-16 14:15:35,301 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704 in 198 msec 2023-07-16 14:15:35,304 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, REOPEN/MOVE in 528 msec 2023-07-16 14:15:35,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-16 14:15:35,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-16 14:15:35,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:35,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:35,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:35,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 14:15:35,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:35,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:59606 deadline: 1689518135783, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed.
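The ConstraintException above, together with the earlier ones at 14:15:34,759 and 14:15:34,761, spells out the invariant this test verifies: a group cannot be removed while it still holds tables, once the tables are gone it still cannot be removed while it holds servers, and servers cannot leave while their group hosts tables. Continuing the sketch above, the expected drain order looks roughly like the following; the try/catch pattern stands in for the test's assertions and is illustrative only, not the test's exact code.

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class DrainAndRemoveBarSketch {
      // rsGroupAdmin and barServers are the same objects built in the previous sketch.
      static void drainAndRemoveBar(RSGroupAdminClient rsGroupAdmin, Set<Address> barServers)
          throws IOException {
        try {
          rsGroupAdmin.removeRSGroup("bar");   // rejected: "RSGroup bar has 1 tables; ..."
        } catch (ConstraintException expected) { /* group still owns the test table */ }

        // Tables must leave first; this drives another REOPEN/MOVE back onto 'default' servers.
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");

        try {
          rsGroupAdmin.removeRSGroup("bar");   // rejected: "RSGroup bar has 3 servers; ..."
        } catch (ConstraintException expected) { /* group still owns its region servers */ }

        // With no tables left in the group, its servers may move back to 'default' ...
        rsGroupAdmin.moveServers(barServers, "default");
        // ... after which removal finally succeeds (ZK GroupInfo count drops at 14:15:35,919 below).
        rsGroupAdmin.removeRSGroup("bar");
      }
    }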
2023-07-16 14:15:35,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741] to rsgroup default 2023-07-16 14:15:35,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:35,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 14:15:35,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:35,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:35,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-16 14:15:35,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766, jenkins-hbase4.apache.org,43741,1689516920562] are moved back to bar 2023-07-16 14:15:35,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-16 14:15:35,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:35,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:35,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:35,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 14:15:35,807 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43741] ipc.CallRunner(144): callId: 222 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:47228 deadline: 1689516995807, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44287 startCode=1689516924704. As of locationSeqNum=10. 
2023-07-16 14:15:35,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:35,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:35,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:35,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:35,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:35,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:35,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:35,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:35,931 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-16 14:15:35,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-16 14:15:35,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:35,937 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516935937"}]},"ts":"1689516935937"} 2023-07-16 14:15:35,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 14:15:35,939 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-16 14:15:35,941 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-16 14:15:35,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, UNASSIGN}] 2023-07-16 14:15:35,946 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, UNASSIGN 2023-07-16 14:15:35,952 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:35,953 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516935952"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516935952"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516935952"}]},"ts":"1689516935952"} 2023-07-16 14:15:35,955 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:36,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 14:15:36,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:36,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6acdd1ace1e7b71fa11bbe5869a0cce0, disabling compactions & flushes 2023-07-16 14:15:36,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:36,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:36,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. after waiting 0 ms 2023-07-16 14:15:36,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 2023-07-16 14:15:36,113 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 14:15:36,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0. 
2023-07-16 14:15:36,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6acdd1ace1e7b71fa11bbe5869a0cce0: 2023-07-16 14:15:36,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:36,117 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6acdd1ace1e7b71fa11bbe5869a0cce0, regionState=CLOSED 2023-07-16 14:15:36,117 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689516936117"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516936117"}]},"ts":"1689516936117"} 2023-07-16 14:15:36,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-16 14:15:36,120 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 6acdd1ace1e7b71fa11bbe5869a0cce0, server=jenkins-hbase4.apache.org,44287,1689516924704 in 164 msec 2023-07-16 14:15:36,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume processing ppid=93 2023-07-16 14:15:36,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6acdd1ace1e7b71fa11bbe5869a0cce0, UNASSIGN in 178 msec 2023-07-16 14:15:36,123 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516936123"}]},"ts":"1689516936123"} 2023-07-16 14:15:36,127 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-16 14:15:36,129 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-16 14:15:36,131 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 198 msec 2023-07-16 14:15:36,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 14:15:36,245 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-16 14:15:36,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-16 14:15:36,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,249 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-16 14:15:36,250 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:36,257 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:36,259 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits] 2023-07-16 14:15:36,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-16 14:15:36,266 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/10.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0/recovered.edits/10.seqid 2023-07-16 14:15:36,267 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testFailRemoveGroup/6acdd1ace1e7b71fa11bbe5869a0cce0 2023-07-16 14:15:36,267 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 14:15:36,269 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,272 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-16 14:15:36,275 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-16 14:15:36,276 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,276 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-16 14:15:36,276 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516936276"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:36,278 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 14:15:36,278 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6acdd1ace1e7b71fa11bbe5869a0cce0, NAME => 'Group_testFailRemoveGroup,,1689516933120.6acdd1ace1e7b71fa11bbe5869a0cce0.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 14:15:36,278 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-16 14:15:36,278 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516936278"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:36,280 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-16 14:15:36,282 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 14:15:36,283 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 36 msec 2023-07-16 14:15:36,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-16 14:15:36,362 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-16 14:15:36,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:36,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
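Once the group bookkeeping is settled, the table itself is torn down through the ordinary Admin API; the DisableTableProcedure (pid=93) and DeleteTableProcedure (pid=96) recorded above are the master-side halves of those two calls. A minimal sketch follows, assuming an open Connection like the one in the first sketch.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public class DropTestTableSketch {
      // conn is an open Connection as in the first sketch.
      static void dropTestTable(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          admin.disableTable(tn);  // UNASSIGN the region, mark DISABLED in hbase:meta (pid=93..95)
          admin.deleteTable(tn);   // archive region files, delete meta rows and table state (pid=96)
        }
      }
    }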
2023-07-16 14:15:36,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:36,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:36,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:36,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:36,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:36,376 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:36,379 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:36,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:36,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:36,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:36,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:36,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:36,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518136394, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:36,395 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:36,397 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:36,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,398 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:36,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:36,425 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=514 (was 512) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data4/current 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-37170703_17 at /127.0.0.1:51428 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:57316 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-37170703_17 at /127.0.0.1:51400 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a7dbe2d-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: hconnection-0x5d23f00a-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=821 (was 823), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=449 (was 479), ProcessCount=176 (was 176), AvailableMemoryMB=2597 (was 2796) 2023-07-16 14:15:36,425 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 14:15:36,446 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=514, OpenFileDescriptor=821, MaxFileDescriptor=60000, SystemLoadAverage=449, ProcessCount=176, AvailableMemoryMB=2597 2023-07-16 14:15:36,446 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 14:15:36,447 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-16 14:15:36,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:36,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:36,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:36,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:36,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:36,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:36,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,458 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:36,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:36,463 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:36,463 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:36,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,466 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,468 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:36,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:36,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:36,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:36,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518136476, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:36,476 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:36,481 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:36,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,482 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:36,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:36,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:36,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:36,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_961270657 2023-07-16 14:15:36,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:36,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,489 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:36,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:36,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921] to rsgroup Group_testMultiTableMove_961270657 2023-07-16 14:15:36,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:36,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:36,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:36,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700] are moved back to default 2023-07-16 14:15:36,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_961270657 2023-07-16 14:15:36,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:36,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:36,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:36,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_961270657 2023-07-16 14:15:36,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:36,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:36,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:36,512 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:36,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-16 14:15:36,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 14:15:36,514 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:36,515 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:36,515 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:36,516 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:36,522 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:36,524 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,524 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b empty. 
2023-07-16 14:15:36,525 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,525 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 14:15:36,542 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:36,544 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2326394a3fc4bb1eda3b7f6ae195158b, NAME => 'GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:36,558 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:36,558 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 2326394a3fc4bb1eda3b7f6ae195158b, disabling compactions & flushes 2023-07-16 14:15:36,558 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:36,558 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:36,558 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. after waiting 0 ms 2023-07-16 14:15:36,558 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:36,558 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 
2023-07-16 14:15:36,559 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 2326394a3fc4bb1eda3b7f6ae195158b: 2023-07-16 14:15:36,561 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:36,562 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516936562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516936562"}]},"ts":"1689516936562"} 2023-07-16 14:15:36,564 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:36,565 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:36,565 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516936565"}]},"ts":"1689516936565"} 2023-07-16 14:15:36,566 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-16 14:15:36,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:36,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:36,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:36,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:36,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:36,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, ASSIGN}] 2023-07-16 14:15:36,573 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, ASSIGN 2023-07-16 14:15:36,574 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41933,1689516920766; forceNewPlan=false, retain=false 2023-07-16 14:15:36,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 14:15:36,724 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 14:15:36,726 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:36,726 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516936726"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516936726"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516936726"}]},"ts":"1689516936726"} 2023-07-16 14:15:36,728 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:36,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 14:15:36,875 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 14:15:36,883 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:36,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2326394a3fc4bb1eda3b7f6ae195158b, NAME => 'GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:36,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:36,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,886 INFO [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,887 DEBUG [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/f 2023-07-16 14:15:36,887 DEBUG [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/f 2023-07-16 14:15:36,888 INFO 
[StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2326394a3fc4bb1eda3b7f6ae195158b columnFamilyName f 2023-07-16 14:15:36,889 INFO [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] regionserver.HStore(310): Store=2326394a3fc4bb1eda3b7f6ae195158b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:36,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,892 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:36,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:36,895 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2326394a3fc4bb1eda3b7f6ae195158b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9416792800, jitterRate=-0.12299282848834991}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:36,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2326394a3fc4bb1eda3b7f6ae195158b: 2023-07-16 14:15:36,896 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b., pid=99, masterSystemTime=1689516936879 2023-07-16 14:15:36,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:36,897 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 
2023-07-16 14:15:36,898 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:36,898 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516936898"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516936898"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516936898"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516936898"}]},"ts":"1689516936898"} 2023-07-16 14:15:36,901 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-16 14:15:36,901 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,41933,1689516920766 in 171 msec 2023-07-16 14:15:36,902 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-16 14:15:36,902 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, ASSIGN in 330 msec 2023-07-16 14:15:36,903 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:36,903 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516936903"}]},"ts":"1689516936903"} 2023-07-16 14:15:36,905 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-16 14:15:36,907 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:36,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 398 msec 2023-07-16 14:15:37,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 14:15:37,116 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-16 14:15:37,117 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-16 14:15:37,117 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:37,121 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-16 14:15:37,122 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:37,122 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-16 14:15:37,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:37,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:37,126 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:37,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-16 14:15:37,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-16 14:15:37,129 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:37,129 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:37,130 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:37,130 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:37,132 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:37,134 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,134 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c empty. 
2023-07-16 14:15:37,135 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,135 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 14:15:37,151 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:37,153 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => c5ce722c8a8bf7c4b5c32334c710d63c, NAME => 'GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing c5ce722c8a8bf7c4b5c32334c710d63c, disabling compactions & flushes 2023-07-16 14:15:37,179 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. after waiting 0 ms 2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,179 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
2023-07-16 14:15:37,179 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for c5ce722c8a8bf7c4b5c32334c710d63c: 2023-07-16 14:15:37,182 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:37,183 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937183"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516937183"}]},"ts":"1689516937183"} 2023-07-16 14:15:37,185 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:37,186 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:37,186 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516937186"}]},"ts":"1689516937186"} 2023-07-16 14:15:37,187 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-16 14:15:37,191 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:37,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:37,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:37,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:37,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:37,192 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, ASSIGN}] 2023-07-16 14:15:37,194 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, ASSIGN 2023-07-16 14:15:37,196 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:37,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-16 14:15:37,346 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 14:15:37,348 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:37,348 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937348"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516937348"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516937348"}]},"ts":"1689516937348"} 2023-07-16 14:15:37,350 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:37,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-16 14:15:37,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5ce722c8a8bf7c4b5c32334c710d63c, NAME => 'GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:37,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:37,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,509 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,511 DEBUG [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/f 2023-07-16 14:15:37,511 DEBUG [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/f 2023-07-16 14:15:37,512 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5ce722c8a8bf7c4b5c32334c710d63c columnFamilyName f 2023-07-16 14:15:37,512 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] regionserver.HStore(310): Store=c5ce722c8a8bf7c4b5c32334c710d63c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:37,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:37,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5ce722c8a8bf7c4b5c32334c710d63c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10681296480, jitterRate=-0.005226746201515198}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:37,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5ce722c8a8bf7c4b5c32334c710d63c: 2023-07-16 14:15:37,525 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c., pid=102, masterSystemTime=1689516937502 2023-07-16 14:15:37,526 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,526 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
2023-07-16 14:15:37,527 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:37,527 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937527"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516937527"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516937527"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516937527"}]},"ts":"1689516937527"} 2023-07-16 14:15:37,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-16 14:15:37,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,43741,1689516920562 in 178 msec 2023-07-16 14:15:37,532 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-16 14:15:37,533 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, ASSIGN in 338 msec 2023-07-16 14:15:37,534 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:37,534 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516937534"}]},"ts":"1689516937534"} 2023-07-16 14:15:37,536 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-16 14:15:37,539 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:37,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 415 msec 2023-07-16 14:15:37,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-16 14:15:37,732 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-16 14:15:37,732 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-16 14:15:37,732 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:37,748 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
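The CreateTableProcedure above (pid=100) and the assignment wait that follows correspond roughly to a client-side sequence like the sketch below. It is an approximation of the test's setup, not its verbatim code; TEST_UTIL and the builder calls are the standard HBase 2.x client/test APIs, and the single family 'f' is taken from the region descriptor logged above.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTableSketch {
    static void createAndWait(HBaseTestingUtility TEST_UTIL) throws Exception {
        TableName tableB = TableName.valueOf("GrouptestMultiTableMoveB");
        try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
            // drives the CREATE_TABLE_* states of pid=100 seen above
            admin.createTable(TableDescriptorBuilder.newBuilder(tableB)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .build());
        }
        // blocks until the table's region is assigned, matching the
        // "Waiting until all regions of table ... get assigned" entries
        TEST_UTIL.waitUntilAllRegionsAssigned(tableB);
    }
}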
2023-07-16 14:15:37,748 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:37,748 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-16 14:15:37,749 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:37,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 14:15:37,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:37,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 14:15:37,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:37,762 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_961270657 2023-07-16 14:15:37,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_961270657 2023-07-16 14:15:37,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:37,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:37,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:37,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:37,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_961270657 2023-07-16 14:15:37,770 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region c5ce722c8a8bf7c4b5c32334c710d63c to RSGroup Group_testMultiTableMove_961270657 2023-07-16 14:15:37,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, REOPEN/MOVE 2023-07-16 14:15:37,771 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_961270657 2023-07-16 14:15:37,772 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 2326394a3fc4bb1eda3b7f6ae195158b to RSGroup Group_testMultiTableMove_961270657 2023-07-16 14:15:37,772 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, REOPEN/MOVE 2023-07-16 14:15:37,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, REOPEN/MOVE 2023-07-16 14:15:37,773 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:37,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_961270657, current retry=0 2023-07-16 14:15:37,776 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, REOPEN/MOVE 2023-07-16 14:15:37,776 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516937773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516937773"}]},"ts":"1689516937773"} 2023-07-16 14:15:37,776 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:37,776 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937776"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516937776"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516937776"}]},"ts":"1689516937776"} 2023-07-16 14:15:37,777 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:37,779 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:37,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5ce722c8a8bf7c4b5c32334c710d63c, disabling compactions & flushes 2023-07-16 14:15:37,931 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. after waiting 0 ms 2023-07-16 14:15:37,931 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:37,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:37,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2326394a3fc4bb1eda3b7f6ae195158b, disabling compactions & flushes 2023-07-16 14:15:37,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:37,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:37,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. after waiting 0 ms 2023-07-16 14:15:37,933 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:37,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:37,936 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:37,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:37,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2326394a3fc4bb1eda3b7f6ae195158b: 2023-07-16 14:15:37,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
2023-07-16 14:15:37,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2326394a3fc4bb1eda3b7f6ae195158b move to jenkins-hbase4.apache.org,34921,1689516920700 record at close sequenceid=2 2023-07-16 14:15:37,937 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5ce722c8a8bf7c4b5c32334c710d63c: 2023-07-16 14:15:37,937 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding c5ce722c8a8bf7c4b5c32334c710d63c move to jenkins-hbase4.apache.org,34921,1689516920700 record at close sequenceid=2 2023-07-16 14:15:37,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:37,939 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=CLOSED 2023-07-16 14:15:37,939 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937939"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516937939"}]},"ts":"1689516937939"} 2023-07-16 14:15:37,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:37,940 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=CLOSED 2023-07-16 14:15:37,940 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516937940"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516937940"}]},"ts":"1689516937940"} 2023-07-16 14:15:37,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-16 14:15:37,943 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,41933,1689516920766 in 163 msec 2023-07-16 14:15:37,943 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 2023-07-16 14:15:37,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-16 14:15:37,944 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,43741,1689516920562 in 164 msec 2023-07-16 14:15:37,944 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34921,1689516920700; forceNewPlan=false, retain=false 
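The REOPEN/MOVE procedures above (pid=103 and pid=104) are what the master schedules when a client asks to move the two tables into the new region server group, as in the "move tables [...] to rsgroup" request logged earlier. A minimal sketch of that call, assuming the hbase-rsgroup client shipped with this branch (RSGroupAdminClient) and the group name generated by the test:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
    static void moveTables(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        Set<TableName> tables = new HashSet<>(Arrays.asList(
            TableName.valueOf("GrouptestMultiTableMoveA"),
            TableName.valueOf("GrouptestMultiTableMoveB")));
        // one TransitRegionStateProcedure (REOPEN/MOVE) is scheduled per region,
        // closing it on its current server and reopening it on a group member
        rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_961270657");
    }
}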
2023-07-16 14:15:38,094 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:38,094 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:38,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516938094"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516938094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516938094"}]},"ts":"1689516938094"} 2023-07-16 14:15:38,094 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516938094"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516938094"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516938094"}]},"ts":"1689516938094"} 2023-07-16 14:15:38,096 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=104, state=RUNNABLE; OpenRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:38,097 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=103, state=RUNNABLE; OpenRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:38,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 
2023-07-16 14:15:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2326394a3fc4bb1eda3b7f6ae195158b, NAME => 'GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:38,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,254 INFO [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,255 DEBUG [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/f 2023-07-16 14:15:38,255 DEBUG [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/f 2023-07-16 14:15:38,256 INFO [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2326394a3fc4bb1eda3b7f6ae195158b columnFamilyName f 2023-07-16 14:15:38,256 INFO [StoreOpener-2326394a3fc4bb1eda3b7f6ae195158b-1] regionserver.HStore(310): Store=2326394a3fc4bb1eda3b7f6ae195158b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:38,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,262 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:38,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2326394a3fc4bb1eda3b7f6ae195158b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9490281760, jitterRate=-0.11614863574504852}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:38,263 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2326394a3fc4bb1eda3b7f6ae195158b: 2023-07-16 14:15:38,263 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b., pid=107, masterSystemTime=1689516938248 2023-07-16 14:15:38,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:38,265 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:38,265 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
2023-07-16 14:15:38,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c5ce722c8a8bf7c4b5c32334c710d63c, NAME => 'GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:38,265 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:38,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,265 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516938265"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516938265"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516938265"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516938265"}]},"ts":"1689516938265"} 2023-07-16 14:15:38,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:38,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,267 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,268 DEBUG [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/f 2023-07-16 14:15:38,268 DEBUG [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/f 2023-07-16 14:15:38,269 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c5ce722c8a8bf7c4b5c32334c710d63c columnFamilyName f 2023-07-16 14:15:38,269 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=104 2023-07-16 14:15:38,269 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=104, state=SUCCESS; OpenRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,34921,1689516920700 in 171 msec 2023-07-16 14:15:38,269 INFO [StoreOpener-c5ce722c8a8bf7c4b5c32334c710d63c-1] regionserver.HStore(310): Store=c5ce722c8a8bf7c4b5c32334c710d63c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:38,270 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, REOPEN/MOVE in 497 msec 2023-07-16 14:15:38,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:38,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c5ce722c8a8bf7c4b5c32334c710d63c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11181205600, jitterRate=0.04133091866970062}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:38,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c5ce722c8a8bf7c4b5c32334c710d63c: 2023-07-16 14:15:38,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c., pid=108, masterSystemTime=1689516938248 2023-07-16 14:15:38,278 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:38,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
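Once both regions have reopened on jenkins-hbase4.apache.org,34921 the move is complete, and the GetRSGroupInfoOfTable requests logged just below are the test verifying the result. A hedged sketch of that check; the method names follow the same hbase-rsgroup client and are assumptions, not the test's exact assertions:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class VerifyGroupSketch {
    static void verify(RSGroupAdminClient rsGroupAdmin) throws Exception {
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(
            TableName.valueOf("GrouptestMultiTableMoveB"));
        // after the move, both tables should report the new group rather than 'default'
        assert "Group_testMultiTableMove_961270657".equals(info.getName());
    }
}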
2023-07-16 14:15:38,278 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:38,279 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516938278"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516938278"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516938278"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516938278"}]},"ts":"1689516938278"} 2023-07-16 14:15:38,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=103 2023-07-16 14:15:38,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=103, state=SUCCESS; OpenRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,34921,1689516920700 in 183 msec 2023-07-16 14:15:38,283 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, REOPEN/MOVE in 512 msec 2023-07-16 14:15:38,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-16 14:15:38,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_961270657. 2023-07-16 14:15:38,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:38,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:38,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:38,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 14:15:38,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:38,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 14:15:38,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:38,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:38,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:38,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_961270657 2023-07-16 14:15:38,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:38,787 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-16 14:15:38,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-16 14:15:38,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:38,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 14:15:38,792 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516938792"}]},"ts":"1689516938792"} 2023-07-16 14:15:38,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 14:15:38,979 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-16 14:15:38,981 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-16 14:15:38,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, UNASSIGN}] 2023-07-16 14:15:38,985 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, UNASSIGN 2023-07-16 14:15:38,986 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:38,986 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516938986"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516938986"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516938986"}]},"ts":"1689516938986"} 2023-07-16 14:15:38,989 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:39,141 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:39,142 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2326394a3fc4bb1eda3b7f6ae195158b, disabling compactions & flushes 2023-07-16 14:15:39,143 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:39,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:39,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. after waiting 0 ms 2023-07-16 14:15:39,143 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 2023-07-16 14:15:39,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:39,149 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b. 
2023-07-16 14:15:39,149 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2326394a3fc4bb1eda3b7f6ae195158b: 2023-07-16 14:15:39,151 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:39,151 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=2326394a3fc4bb1eda3b7f6ae195158b, regionState=CLOSED 2023-07-16 14:15:39,151 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516939151"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516939151"}]},"ts":"1689516939151"} 2023-07-16 14:15:39,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-16 14:15:39,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 2326394a3fc4bb1eda3b7f6ae195158b, server=jenkins-hbase4.apache.org,34921,1689516920700 in 163 msec 2023-07-16 14:15:39,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-16 14:15:39,156 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=2326394a3fc4bb1eda3b7f6ae195158b, UNASSIGN in 171 msec 2023-07-16 14:15:39,156 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516939156"}]},"ts":"1689516939156"} 2023-07-16 14:15:39,157 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-16 14:15:39,159 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-16 14:15:39,161 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 372 msec 2023-07-16 14:15:39,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 14:15:39,180 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-16 14:15:39,182 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-16 14:15:39,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,185 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_961270657' 2023-07-16 14:15:39,186 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,191 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:39,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:39,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,193 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits] 2023-07-16 14:15:39,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-16 14:15:39,200 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b/recovered.edits/7.seqid 2023-07-16 14:15:39,201 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveA/2326394a3fc4bb1eda3b7f6ae195158b 2023-07-16 14:15:39,201 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 14:15:39,204 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,207 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-16 14:15:39,209 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-16 14:15:39,210 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,210 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
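The Disable/DeleteTableProcedure pairs above and below (pid=109/112 for GrouptestMultiTableMoveA, pid=113/116 for GrouptestMultiTableMoveB) are standard test teardown. A minimal sketch of the client calls that drive them, assuming an Admin obtained from the test's shared connection:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class DropTableSketch {
    static void dropTable(Admin admin, TableName table) throws Exception {
        admin.disableTable(table); // DISABLE_TABLE_*: regions UNASSIGNed, table marked DISABLED in hbase:meta
        admin.deleteTable(table);  // DELETE_TABLE_*: region dirs archived, meta rows and descriptor removed
    }
}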
2023-07-16 14:15:39,211 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516939210"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:39,212 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 14:15:39,213 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 2326394a3fc4bb1eda3b7f6ae195158b, NAME => 'GrouptestMultiTableMoveA,,1689516936508.2326394a3fc4bb1eda3b7f6ae195158b.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 14:15:39,213 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-16 14:15:39,213 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516939213"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:39,215 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-16 14:15:39,218 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 14:15:39,226 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 36 msec 2023-07-16 14:15:39,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-16 14:15:39,298 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-16 14:15:39,299 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-16 14:15:39,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-16 14:15:39,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 14:15:39,305 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516939305"}]},"ts":"1689516939305"} 2023-07-16 14:15:39,306 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-16 14:15:39,308 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-16 14:15:39,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, UNASSIGN}] 2023-07-16 14:15:39,314 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, UNASSIGN 2023-07-16 14:15:39,315 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:39,315 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516939315"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516939315"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516939315"}]},"ts":"1689516939315"} 2023-07-16 14:15:39,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,34921,1689516920700}] 2023-07-16 14:15:39,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 14:15:39,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:39,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c5ce722c8a8bf7c4b5c32334c710d63c, disabling compactions & flushes 2023-07-16 14:15:39,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:39,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:39,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. after waiting 0 ms 2023-07-16 14:15:39,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 2023-07-16 14:15:39,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:39,477 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c. 
2023-07-16 14:15:39,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c5ce722c8a8bf7c4b5c32334c710d63c: 2023-07-16 14:15:39,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:39,480 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 14:15:39,481 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=c5ce722c8a8bf7c4b5c32334c710d63c, regionState=CLOSED 2023-07-16 14:15:39,481 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689516939481"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516939481"}]},"ts":"1689516939481"} 2023-07-16 14:15:39,487 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-16 14:15:39,487 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure c5ce722c8a8bf7c4b5c32334c710d63c, server=jenkins-hbase4.apache.org,34921,1689516920700 in 167 msec 2023-07-16 14:15:39,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-16 14:15:39,489 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=c5ce722c8a8bf7c4b5c32334c710d63c, UNASSIGN in 176 msec 2023-07-16 14:15:39,498 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516939498"}]},"ts":"1689516939498"} 2023-07-16 14:15:39,500 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-16 14:15:39,502 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-16 14:15:39,510 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 208 msec 2023-07-16 14:15:39,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 14:15:39,607 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-16 14:15:39,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-16 14:15:39,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,611 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_961270657' 2023-07-16 14:15:39,612 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:39,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,617 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:39,619 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits] 2023-07-16 14:15:39,623 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-16 14:15:39,626 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits/7.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c/recovered.edits/7.seqid 2023-07-16 14:15:39,627 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/GrouptestMultiTableMoveB/c5ce722c8a8bf7c4b5c32334c710d63c 2023-07-16 14:15:39,627 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 14:15:39,630 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,632 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-16 14:15:39,634 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 
2023-07-16 14:15:39,636 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,636 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 2023-07-16 14:15:39,636 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516939636"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:39,638 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 14:15:39,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c5ce722c8a8bf7c4b5c32334c710d63c, NAME => 'GrouptestMultiTableMoveB,,1689516937123.c5ce722c8a8bf7c4b5c32334c710d63c.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 14:15:39,638 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-16 14:15:39,638 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516939638"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:39,643 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-16 14:15:39,645 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 14:15:39,647 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 37 msec 2023-07-16 14:15:39,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-16 14:15:39,725 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-16 14:15:39,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:39,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921] to rsgroup default 2023-07-16 14:15:39,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_961270657 2023-07-16 14:15:39,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_961270657, current retry=0 2023-07-16 14:15:39,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700] are moved back to Group_testMultiTableMove_961270657 2023-07-16 14:15:39,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_961270657 => default 2023-07-16 14:15:39,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_961270657 2023-07-16 14:15:39,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:39,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:39,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:39,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,746 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:39,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:39,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,755 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:39,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:39,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:39,761 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:39,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,768 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:39,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518139768, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:39,769 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:39,771 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:39,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,772 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:39,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:39,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,794 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511 (was 514), OpenFileDescriptor=814 (was 821), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 449) - SystemLoadAverage LEAK? 
-, ProcessCount=176 (was 176), AvailableMemoryMB=2430 (was 2597) 2023-07-16 14:15:39,794 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-16 14:15:39,815 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=511, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=176, AvailableMemoryMB=2429 2023-07-16 14:15:39,815 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-16 14:15:39,815 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-16 14:15:39,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,821 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:39,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:39,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:39,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:39,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,832 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:39,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:39,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:39,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:39,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:39,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518139846, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:39,847 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:39,849 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:39,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,850 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:39,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-16 14:15:39,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,863 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:39,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup oldGroup 2023-07-16 14:15:39,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:39,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to default 2023-07-16 14:15:39,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-16 14:15:39,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 14:15:39,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 14:15:39,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:39,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,886 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-16 14:15:39,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 14:15:39,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,891 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:39,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:39,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43741] to rsgroup anotherRSGroup 2023-07-16 14:15:39,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 14:15:39,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:39,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:39,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43741,1689516920562] are moved back to default 2023-07-16 14:15:39,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-16 14:15:39,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,907 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 14:15:39,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 14:15:39,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:39,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-16 14:15:39,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:59606 deadline: 1689518139916, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-16 14:15:39,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-16 14:15:39,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:59606 deadline: 1689518139919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-16 14:15:39,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-16 14:15:39,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:59606 deadline: 1689518139920, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-16 14:15:39,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-16 14:15:39,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:59606 deadline: 1689518139921, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-16 14:15:39,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
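The three ConstraintExceptions above come from the rename checks in RSGroupInfoManagerImpl: the source group must exist, the target name must be free, and the built-in default group can never be renamed. The sketch below is a minimal illustration of how a client trips each check; it is not the test's own code. It assumes the IA.Private org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient that this suite drives elsewhere, and it assumes that client exposes a renameRSGroup(oldName, newName) call matching the server-side RPC shown in the stack traces.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupConstraintsSketch {
  // Each rename below is expected to fail with the ConstraintException message
  // quoted from the log; the catch blocks only document that expectation.
  static void demo(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("anotherRSGroup");                         // AddRSGroup, as logged
    try {
      rsGroupAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1");
    } catch (ConstraintException e) {
      // "RSGroup nonExistingRSGroup does not exist"
    }
    try {
      rsGroupAdmin.renameRSGroup("oldGroup", "anotherRSGroup");
    } catch (ConstraintException e) {
      // "Group already exists: anotherRSGroup"
    }
    try {
      rsGroupAdmin.renameRSGroup("default", "newRSGroup2");
    } catch (ConstraintException e) {
      // "Can't rename default rsgroup"
    }
  }
}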
2023-07-16 14:15:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43741] to rsgroup default 2023-07-16 14:15:39,930 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 14:15:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:39,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-16 14:15:39,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43741,1689516920562] are moved back to anotherRSGroup 2023-07-16 14:15:39,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-16 14:15:39,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-16 14:15:39,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 14:15:39,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-16 14:15:39,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup default 2023-07-16 14:15:39,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 14:15:39,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-16 14:15:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to oldGroup 2023-07-16 14:15:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-16 14:15:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-16 14:15:39,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,955 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:39,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:39,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
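The MoveTables / MoveServers / RemoveRSGroup sequence above is the teardown pattern repeated after every test method: a group has to be emptied back into the default group before RemoveRSGroup will succeed. A hedged sketch of that cleanup, using the moveServers signature visible in the stack traces above and assuming RSGroupInfo.DEFAULT_GROUP is the constant for the "default" group:

import java.io.IOException;
import java.util.Set;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupCleanupSketch {
  // Move a test group's servers back to 'default', then drop the now-empty group.
  static void drop(RSGroupAdminClient rsGroupAdmin, String group) throws IOException {
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
    Set<Address> servers = info.getServers();
    if (!servers.isEmpty()) {
      rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);   // "Move servers done: <group> => default"
    }
    rsGroupAdmin.removeRSGroup(group);                                // RemoveRSGroup, as logged
  }
}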
2023-07-16 14:15:39,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:39,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:39,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:39,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:39,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:39,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:39,966 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:39,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:39,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:39,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:39,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:39,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:39,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:39,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:39,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518139977, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:39,978 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:39,980 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:39,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:39,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:39,981 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:39,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:39,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:40,001 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=515 (was 511) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=509 (was 509), ProcessCount=176 (was 176), AvailableMemoryMB=2425 (was 2429) 2023-07-16 14:15:40,001 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 14:15:40,022 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=515, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=509, ProcessCount=176, AvailableMemoryMB=2423 2023-07-16 14:15:40,023 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 14:15:40,023 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-16 14:15:40,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:40,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:40,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:40,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
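The WARN "Got this on setup, FYI" above (and again in the next test's setup below) is tolerated by design: the base class tries to park the active master's address, port 41971 here, in a dedicated "master" rsgroup, and the move is rejected because that address is not a registered region server. A hedged sketch of that best-effort step, with the address copied from the log and the client handle assumed rather than taken from the test source:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MasterGroupSetupSketch {
  // Best effort: keep the master's address out of the default group so that
  // server-move assertions only ever see real region servers.
  static void setup(RSGroupAdminClient rsGroupAdmin) throws IOException {
    Address master = Address.fromString("jenkins-hbase4.apache.org:41971");
    try {
      rsGroupAdmin.addRSGroup("master");                                // AddRSGroup master, as logged
      rsGroupAdmin.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException e) {
      // tolerated: "Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist."
    }
  }
}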
2023-07-16 14:15:40,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:40,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:40,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:40,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:40,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:40,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:40,039 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:40,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:40,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:40,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:40,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:40,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:40,048 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:40,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:40,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:40,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518140050, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:40,051 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:40,053 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:40,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:40,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:40,054 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:40,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:40,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:40,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:40,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:40,057 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-16 14:15:40,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:40,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:40,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:40,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:40,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:40,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:40,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup oldgroup 2023-07-16 14:15:40,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:40,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:40,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:40,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:40,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to default 2023-07-16 14:15:40,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-16 14:15:40,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:40,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:40,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:40,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 14:15:40,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:40,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:40,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-16 14:15:40,092 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:40,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-16 14:15:40,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-16 14:15:40,098 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:40,100 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,101 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:40,101 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:40,105 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:40,107 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,108 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 empty. 
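The HMaster$4 line above records the descriptor of the create request that becomes CreateTableProcedure pid=117. Reconstructed against the public client API, the equivalent call would look roughly like the sketch below; the Admin handle is assumed, and the descriptor values ('testRename', one family 'tr', one version, one region replica) are taken from the log:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTestRenameTableSketch {
  static void create(Admin admin) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("testRename"))
        .setRegionReplication(1)                        // REGION_REPLICATION => '1'
        .setColumnFamily(ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("tr"))            // family 'tr'
            .setMaxVersions(1)                          // VERSIONS => '1'
            .build())
        .build();
    admin.createTable(desc);   // blocks until the procedure (pid=117 above) completes
  }
}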
2023-07-16 14:15:40,109 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,109 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-16 14:15:40,141 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:40,148 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 30a8ae896fd23311b4a9b3f859e17ea6, NAME => 'testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:40,178 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:40,179 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 30a8ae896fd23311b4a9b3f859e17ea6, disabling compactions & flushes 2023-07-16 14:15:40,179 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,179 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,179 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. after waiting 0 ms 2023-07-16 14:15:40,179 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,179 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,179 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:40,182 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:40,183 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516940183"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516940183"}]},"ts":"1689516940183"} 2023-07-16 14:15:40,185 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
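[Editor's note] The create 'testRename' request above (one column family 'tr', everything else at defaults) is what a plain Admin.createTable call produces. A minimal sketch against the standard HBase 2.x client API; only the table and family names are taken from the log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Illustrative helper, not part of the test source.
    final class CreateTestRename {
      static void createTable(Connection conn) throws IOException {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("testRename"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr")) // defaults match the logged schema
            .build();
        try (Admin admin = conn.getAdmin()) {
          // The master runs this as a CreateTableProcedure (pid=117 in this run)
          // and the client polls until the procedure completes.
          admin.createTable(desc);
        }
      }
    }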
2023-07-16 14:15:40,186 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:40,186 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516940186"}]},"ts":"1689516940186"} 2023-07-16 14:15:40,188 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-16 14:15:40,191 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:40,191 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:40,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:40,192 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:40,192 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, ASSIGN}] 2023-07-16 14:15:40,194 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, ASSIGN 2023-07-16 14:15:40,195 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:40,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-16 14:15:40,345 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 14:15:40,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:40,347 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516940347"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516940347"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516940347"}]},"ts":"1689516940347"} 2023-07-16 14:15:40,349 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:40,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-16 14:15:40,504 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,504 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30a8ae896fd23311b4a9b3f859e17ea6, NAME => 'testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:40,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:40,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,507 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,508 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:40,508 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:40,509 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30a8ae896fd23311b4a9b3f859e17ea6 columnFamilyName tr 2023-07-16 14:15:40,509 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(310): Store=30a8ae896fd23311b4a9b3f859e17ea6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:40,510 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,515 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:40,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 30a8ae896fd23311b4a9b3f859e17ea6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11239372960, jitterRate=0.04674817621707916}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:40,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:40,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6., pid=119, masterSystemTime=1689516940500 2023-07-16 14:15:40,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 
2023-07-16 14:15:40,519 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:40,519 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516940518"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516940518"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516940518"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516940518"}]},"ts":"1689516940518"} 2023-07-16 14:15:40,521 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-16 14:15:40,522 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562 in 171 msec 2023-07-16 14:15:40,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-16 14:15:40,523 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, ASSIGN in 329 msec 2023-07-16 14:15:40,524 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:40,524 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516940524"}]},"ts":"1689516940524"} 2023-07-16 14:15:40,525 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-16 14:15:40,527 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:40,529 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 441 msec 2023-07-16 14:15:40,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-16 14:15:40,698 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-16 14:15:40,698 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-16 14:15:40,698 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:40,702 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-16 14:15:40,703 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:40,703 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
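[Editor's note] The 'Waiting until all regions of table testRename get assigned' entries come from the test utility's assignment wait rather than from the master. A hedged sketch, assuming TEST_UTIL is the HBaseTestingUtility instance that started this mini cluster:

    // Blocks until every region of the table is assigned and reflected in hbase:meta;
    // the log shows the 60,000 ms timeout being applied for this wait.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));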
2023-07-16 14:15:40,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-16 14:15:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:40,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:40,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:40,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:40,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-16 14:15:40,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 30a8ae896fd23311b4a9b3f859e17ea6 to RSGroup oldgroup 2023-07-16 14:15:40,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:40,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:40,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:40,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:40,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:40,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE 2023-07-16 14:15:40,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-16 14:15:40,712 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE 2023-07-16 14:15:40,712 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:40,712 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516940712"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516940712"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516940712"}]},"ts":"1689516940712"} 2023-07-16 14:15:40,714 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:40,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 30a8ae896fd23311b4a9b3f859e17ea6, disabling compactions & flushes 2023-07-16 14:15:40,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. after waiting 0 ms 2023-07-16 14:15:40,868 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:40,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:40,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:40,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 30a8ae896fd23311b4a9b3f859e17ea6 move to jenkins-hbase4.apache.org,41933,1689516920766 record at close sequenceid=2 2023-07-16 14:15:40,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:40,875 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=CLOSED 2023-07-16 14:15:40,875 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516940875"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516940875"}]},"ts":"1689516940875"} 2023-07-16 14:15:40,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-16 14:15:40,878 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562 in 162 msec 2023-07-16 14:15:40,878 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41933,1689516920766; 
forceNewPlan=false, retain=false 2023-07-16 14:15:41,028 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 14:15:41,029 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:41,029 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516941029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516941029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516941029"}]},"ts":"1689516941029"} 2023-07-16 14:15:41,031 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:41,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:41,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30a8ae896fd23311b4a9b3f859e17ea6, NAME => 'testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:41,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:41,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,189 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,190 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:41,190 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:41,190 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30a8ae896fd23311b4a9b3f859e17ea6 columnFamilyName tr 2023-07-16 14:15:41,191 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(310): Store=30a8ae896fd23311b4a9b3f859e17ea6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:41,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:41,196 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 30a8ae896fd23311b4a9b3f859e17ea6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11443150240, jitterRate=0.06572641432285309}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:41,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:41,197 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6., pid=122, masterSystemTime=1689516941183 2023-07-16 14:15:41,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:41,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 
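[Editor's note] The 'move tables [testRename] to rsgroup oldgroup' request, and the REOPEN/MOVE of region 30a8ae896fd23311b4a9b3f859e17ea6 that it triggers above, correspond to a single moveTables call; the group-membership checks logged just below (GetRSGroupInfoOfTable / GetRSGroupInfo) are the follow-up verification. A minimal sketch under the same assumptions as the earlier snippets:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative helper, not part of the test source.
    final class MoveTestRename {
      static void moveAndVerify(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        TableName table = TableName.valueOf("testRename");
        // Each region of the table is closed on its current server and reopened
        // on a server in 'oldgroup' (the TransitRegionStateProcedure REOPEN/MOVE above).
        rsGroupAdmin.moveTables(Collections.singleton(table), "oldgroup");
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
        assert "oldgroup".equals(info.getName());               // RSGroupAdminService.GetRSGroupInfoOfTable
      }
    }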
2023-07-16 14:15:41,199 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:41,199 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516941199"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516941199"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516941199"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516941199"}]},"ts":"1689516941199"} 2023-07-16 14:15:41,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-16 14:15:41,202 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,41933,1689516920766 in 169 msec 2023-07-16 14:15:41,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE in 491 msec 2023-07-16 14:15:41,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-16 14:15:41,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-16 14:15:41,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:41,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:41,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:41,718 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:41,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 14:15:41,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:41,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 14:15:41,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:41,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 14:15:41,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:41,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:41,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:41,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-16 14:15:41,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:41,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:41,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:41,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:41,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:41,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:41,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:41,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:41,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43741] to rsgroup normal 2023-07-16 14:15:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:41,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:41,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:41,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:41,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43741,1689516920562] are moved back to default 2023-07-16 14:15:41,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-16 14:15:41,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:41,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:41,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:41,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 14:15:41,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:41,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:41,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-16 14:15:41,757 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:41,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-16 14:15:41,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-16 14:15:41,759 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:41,759 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:41,759 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:41,760 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 14:15:41,760 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:41,762 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:41,763 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:41,764 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d empty. 2023-07-16 14:15:41,764 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:41,765 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-16 14:15:41,779 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:41,780 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2aaf0ce709cf8e71a96440aaa2c8020d, NAME => 'unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 2aaf0ce709cf8e71a96440aaa2c8020d, disabling compactions & flushes 2023-07-16 14:15:41,791 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:41,791 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:41,792 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. after waiting 0 ms 2023-07-16 14:15:41,792 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:41,792 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
2023-07-16 14:15:41,792 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:41,794 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:41,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516941794"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516941794"}]},"ts":"1689516941794"} 2023-07-16 14:15:41,796 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:41,796 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:41,797 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516941797"}]},"ts":"1689516941797"} 2023-07-16 14:15:41,798 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-16 14:15:41,802 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, ASSIGN}] 2023-07-16 14:15:41,803 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, ASSIGN 2023-07-16 14:15:41,804 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:41,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-16 14:15:41,956 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:41,956 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516941956"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516941956"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516941956"}]},"ts":"1689516941956"} 2023-07-16 14:15:41,958 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:42,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=123 2023-07-16 14:15:42,114 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2aaf0ce709cf8e71a96440aaa2c8020d, NAME => 'unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:42,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:42,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,117 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,119 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:42,120 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:42,120 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2aaf0ce709cf8e71a96440aaa2c8020d columnFamilyName ut 2023-07-16 14:15:42,121 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(310): Store=2aaf0ce709cf8e71a96440aaa2c8020d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:42,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,122 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:42,130 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2aaf0ce709cf8e71a96440aaa2c8020d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10920588480, jitterRate=0.01705905795097351}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:42,130 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:42,131 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d., pid=125, masterSystemTime=1689516942110 2023-07-16 14:15:42,133 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,133 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
2023-07-16 14:15:42,133 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:42,134 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516942133"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516942133"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516942133"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516942133"}]},"ts":"1689516942133"} 2023-07-16 14:15:42,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-16 14:15:42,139 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704 in 178 msec 2023-07-16 14:15:42,141 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-16 14:15:42,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, ASSIGN in 337 msec 2023-07-16 14:15:42,142 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:42,142 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516942142"}]},"ts":"1689516942142"} 2023-07-16 14:15:42,144 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-16 14:15:42,147 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:42,149 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 393 msec 2023-07-16 14:15:42,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-16 14:15:42,362 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-16 14:15:42,362 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-16 14:15:42,362 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:42,366 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-16 14:15:42,367 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:42,367 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
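[Editor's note] The 'normal' group and 'unmovedTable' entries above, together with the move-to-'normal' logged just below, repeat the same sequence with different names. Sketched under the same assumptions, reusing the rsGroupAdmin and admin handles from the earlier snippets:

    rsGroupAdmin.addRSGroup("normal");
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43741)), "normal");
    admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("unmovedTable"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("ut")).build());
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "normal");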
2023-07-16 14:15:42,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-16 14:15:42,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 14:15:42,371 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:42,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:42,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:42,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:42,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-16 14:15:42,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 2aaf0ce709cf8e71a96440aaa2c8020d to RSGroup normal 2023-07-16 14:15:42,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE 2023-07-16 14:15:42,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-16 14:15:42,375 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE 2023-07-16 14:15:42,375 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:42,376 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516942375"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516942375"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516942375"}]},"ts":"1689516942375"} 2023-07-16 14:15:42,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:42,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2aaf0ce709cf8e71a96440aaa2c8020d, disabling compactions & flushes 2023-07-16 14:15:42,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
2023-07-16 14:15:42,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. after waiting 0 ms 2023-07-16 14:15:42,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,536 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:42,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:42,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2aaf0ce709cf8e71a96440aaa2c8020d move to jenkins-hbase4.apache.org,43741,1689516920562 record at close sequenceid=2 2023-07-16 14:15:42,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,539 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=CLOSED 2023-07-16 14:15:42,539 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516942539"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516942539"}]},"ts":"1689516942539"} 2023-07-16 14:15:42,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-16 14:15:42,542 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704 in 163 msec 2023-07-16 14:15:42,543 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:42,693 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:42,694 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516942693"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516942693"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516942693"}]},"ts":"1689516942693"} 2023-07-16 14:15:42,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:42,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2aaf0ce709cf8e71a96440aaa2c8020d, NAME => 'unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:42,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:42,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,854 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,856 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,857 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:42,857 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:42,858 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
2aaf0ce709cf8e71a96440aaa2c8020d columnFamilyName ut 2023-07-16 14:15:42,858 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(310): Store=2aaf0ce709cf8e71a96440aaa2c8020d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:42,859 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,863 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:42,864 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2aaf0ce709cf8e71a96440aaa2c8020d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11302123840, jitterRate=0.052592307329177856}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:42,864 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:42,865 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d., pid=128, masterSystemTime=1689516942848 2023-07-16 14:15:42,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:42,867 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
2023-07-16 14:15:42,868 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:42,868 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516942868"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516942868"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516942868"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516942868"}]},"ts":"1689516942868"} 2023-07-16 14:15:42,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-16 14:15:42,871 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,43741,1689516920562 in 174 msec 2023-07-16 14:15:42,873 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE in 497 msec 2023-07-16 14:15:43,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-16 14:15:43,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-16 14:15:43,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:43,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:43,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:43,381 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:43,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 14:15:43,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:43,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 14:15:43,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 14:15:43,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:43,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-16 14:15:43,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:43,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:43,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:43,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-16 14:15:43,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-16 14:15:43,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:43,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:43,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-16 14:15:43,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:43,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 14:15:43,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:43,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 14:15:43,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:43,403 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:43,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:43,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-16 14:15:43,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:43,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:43,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:43,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:43,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:43,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-16 14:15:43,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 2aaf0ce709cf8e71a96440aaa2c8020d to RSGroup default 2023-07-16 14:15:43,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE 2023-07-16 14:15:43,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 14:15:43,416 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE 2023-07-16 14:15:43,416 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:43,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516943416"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516943416"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516943416"}]},"ts":"1689516943416"} 2023-07-16 14:15:43,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:43,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2aaf0ce709cf8e71a96440aaa2c8020d, disabling compactions & flushes 2023-07-16 14:15:43,573 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. after waiting 0 ms 2023-07-16 14:15:43,573 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:43,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,579 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:43,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2aaf0ce709cf8e71a96440aaa2c8020d move to jenkins-hbase4.apache.org,44287,1689516924704 record at close sequenceid=5 2023-07-16 14:15:43,580 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,581 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=CLOSED 2023-07-16 14:15:43,581 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516943581"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516943581"}]},"ts":"1689516943581"} 2023-07-16 14:15:43,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-16 14:15:43,584 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,43741,1689516920562 in 164 msec 2023-07-16 14:15:43,586 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:43,737 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:43,737 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516943737"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516943737"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516943737"}]},"ts":"1689516943737"} 2023-07-16 14:15:43,739 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:43,894 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,894 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2aaf0ce709cf8e71a96440aaa2c8020d, NAME => 'unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:43,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:43,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,896 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,897 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:43,897 DEBUG [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/ut 2023-07-16 14:15:43,898 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2aaf0ce709cf8e71a96440aaa2c8020d columnFamilyName ut 2023-07-16 14:15:43,898 INFO [StoreOpener-2aaf0ce709cf8e71a96440aaa2c8020d-1] regionserver.HStore(310): Store=2aaf0ce709cf8e71a96440aaa2c8020d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:43,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,907 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:43,908 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2aaf0ce709cf8e71a96440aaa2c8020d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9756706080, jitterRate=-0.09133593738079071}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:43,908 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:43,909 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d., pid=131, masterSystemTime=1689516943890 2023-07-16 14:15:43,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:43,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
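[editor's note] The entries from 14:15:43,385 onward record a RenameRSGroup call (oldgroup -> newgroup), follow-up GetRSGroupInfo/GetRSGroupInfoOfTable lookups, and a MoveTables call that returns unmovedTable to the "default" group, which again reopens the region on a server of the target group (pid=129..131). Roughly, the client side of that sequence looks like the sketch below, under the same RSGroupAdminClient assumption as before; renameRSGroup is taken to be the client method behind the RenameRSGroup RPC seen in the log.

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RenameGroupSketch {
  // 'rsGroupAdmin' is an already constructed RSGroupAdminClient (see the previous sketch).
  static void renameAndVerify(RSGroupAdminClient rsGroupAdmin) throws Exception {
    // Rename the group; member tables and servers keep their membership under the new name.
    rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

    // Verify the renamed group exists and that testRename still resolves to it.
    RSGroupInfo renamed = rsGroupAdmin.getRSGroupInfo("newgroup");
    RSGroupInfo ofTable = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
    if (renamed == null || ofTable == null || !"newgroup".equals(ofTable.getName())) {
      throw new IllegalStateException("rename not visible to the group store");
    }

    // unmovedTable is then returned to the 'default' group, which triggers another
    // REOPEN/MOVE of its region, as the entries above show.
    rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("unmovedTable")), "default");
  }
}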
2023-07-16 14:15:43,912 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=2aaf0ce709cf8e71a96440aaa2c8020d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:43,912 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689516943911"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516943911"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516943911"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516943911"}]},"ts":"1689516943911"} 2023-07-16 14:15:43,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-16 14:15:43,915 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 2aaf0ce709cf8e71a96440aaa2c8020d, server=jenkins-hbase4.apache.org,44287,1689516924704 in 174 msec 2023-07-16 14:15:43,916 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=2aaf0ce709cf8e71a96440aaa2c8020d, REOPEN/MOVE in 500 msec 2023-07-16 14:15:44,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-16 14:15:44,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-16 14:15:44,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:44,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43741] to rsgroup default 2023-07-16 14:15:44,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 14:15:44,420 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:44,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:44,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:44,421 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:44,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-16 14:15:44,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43741,1689516920562] are moved back to normal 2023-07-16 14:15:44,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-16 14:15:44,423 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:44,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-16 14:15:44,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:44,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:44,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:44,428 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 14:15:44,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:44,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:44,431 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:44,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:44,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:44,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:44,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:44,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:44,435 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:44,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:44,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:44,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-16 14:15:44,440 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:44,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:44,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:44,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-16 14:15:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(345): Moving region 30a8ae896fd23311b4a9b3f859e17ea6 to RSGroup default 2023-07-16 14:15:44,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE 2023-07-16 14:15:44,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 14:15:44,444 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE 2023-07-16 14:15:44,445 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:44,445 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516944445"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516944445"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516944445"}]},"ts":"1689516944445"} 2023-07-16 14:15:44,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,41933,1689516920766}] 2023-07-16 14:15:44,599 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 30a8ae896fd23311b4a9b3f859e17ea6, disabling compactions & flushes 2023-07-16 14:15:44,601 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 
after waiting 0 ms 2023-07-16 14:15:44,601 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,608 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 14:15:44,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:44,609 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 30a8ae896fd23311b4a9b3f859e17ea6 move to jenkins-hbase4.apache.org,43741,1689516920562 record at close sequenceid=5 2023-07-16 14:15:44,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,612 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=CLOSED 2023-07-16 14:15:44,612 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516944612"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516944612"}]},"ts":"1689516944612"} 2023-07-16 14:15:44,615 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-16 14:15:44,615 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,41933,1689516920766 in 167 msec 2023-07-16 14:15:44,616 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:44,651 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 14:15:44,766 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
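[editor's note] The cleanup entries above follow the usual TestRSGroupsBase teardown pattern: servers are moved back to the "default" group, the emptied groups (normal, master) are removed, and the remaining test table testRename is sent back to "default". A sketch of the two admin calls involved, under the same RSGroupAdminClient assumption; Address.fromParts builds the host:port key the group store uses.

import java.util.Collections;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class GroupTeardownSketch {
  // Move one region server back to 'default', then drop the emptied group.
  // Assumption: host/port identify a live region server; passing an address that is
  // not a known server is rejected with a ConstraintException, as the
  // "is either offline or it does not exist" entries later in the log show.
  static void returnServerAndDropGroup(RSGroupAdminClient rsGroupAdmin,
      String host, int port, String group) throws Exception {
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts(host, port)), "default");
    rsGroupAdmin.removeRSGroup(group);
  }
}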
2023-07-16 14:15:44,766 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:44,766 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516944766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516944766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516944766"}]},"ts":"1689516944766"} 2023-07-16 14:15:44,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:44,924 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 30a8ae896fd23311b4a9b3f859e17ea6, NAME => 'testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,924 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:44,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,926 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,927 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:44,927 DEBUG [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/tr 2023-07-16 14:15:44,927 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 30a8ae896fd23311b4a9b3f859e17ea6 columnFamilyName tr 2023-07-16 14:15:44,928 INFO [StoreOpener-30a8ae896fd23311b4a9b3f859e17ea6-1] regionserver.HStore(310): Store=30a8ae896fd23311b4a9b3f859e17ea6/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:44,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:44,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 30a8ae896fd23311b4a9b3f859e17ea6; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11474087520, jitterRate=0.06860767304897308}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:44,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:44,934 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6., pid=134, masterSystemTime=1689516944920 2023-07-16 14:15:44,935 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:44,936 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 
2023-07-16 14:15:44,936 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=30a8ae896fd23311b4a9b3f859e17ea6, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:44,936 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689516944936"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516944936"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516944936"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516944936"}]},"ts":"1689516944936"} 2023-07-16 14:15:44,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-16 14:15:44,939 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 30a8ae896fd23311b4a9b3f859e17ea6, server=jenkins-hbase4.apache.org,43741,1689516920562 in 169 msec 2023-07-16 14:15:44,940 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=30a8ae896fd23311b4a9b3f859e17ea6, REOPEN/MOVE in 495 msec 2023-07-16 14:15:45,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-16 14:15:45,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-16 14:15:45,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:45,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup default 2023-07-16 14:15:45,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 14:15:45,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-16 14:15:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to newgroup 2023-07-16 14:15:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-16 14:15:45,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:45,451 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-16 14:15:45,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:45,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:45,459 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:45,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:45,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:45,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:45,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:45,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518145473, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:45,474 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:45,475 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:45,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,476 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:45,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:45,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,495 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=510 (was 515), OpenFileDescriptor=774 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 509), ProcessCount=175 (was 176), AvailableMemoryMB=2414 (was 2423) 2023-07-16 14:15:45,495 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 14:15:45,511 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=175, AvailableMemoryMB=2413 2023-07-16 14:15:45,511 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-16 14:15:45,511 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-16 14:15:45,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:45,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:45,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:45,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:45,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:45,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:45,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:45,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:45,525 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:45,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:45,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:45,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:45,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:45,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518145535, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:45,535 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:45,537 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:45,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,538 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,538 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:45,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:45,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-16 14:15:45,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:45,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-16 14:15:45,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-16 14:15:45,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-16 14:15:45,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-16 14:15:45,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:59606 deadline: 1689518145547, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-16 14:15:45,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-16 14:15:45,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,549 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:59606 deadline: 1689518145549, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 14:15:45,555 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-16 14:15:45,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-16 14:15:45,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-16 14:15:45,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:59606 deadline: 1689518145560, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 14:15:45,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:45,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
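
[Editor's note] The exchanges logged just above (remove rsgroup bogus, move servers [bogus:123], balance rsgroup bogus) are the server rejecting bogus arguments with ConstraintException. A minimal client-side sketch of calls that would provoke these responses is below; it is not the test's verbatim code, and it assumes an open Connection to the mini-cluster and the branch-2.4 hbase-rsgroup client API (RSGroupAdminClient, Address).

```java
// Hedged sketch, not the actual TestRSGroupsAdmin1#testBogusArgs source.
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class BogusArgsSketch {
  static void probeBogusArgs(Connection conn) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    try {
      // "remove rsgroup bogus" -> ConstraintException: RSGroup bogus does not exist
      admin.removeRSGroup("bogus");
    } catch (ConstraintException expected) {
      // surfaces client-side after the RemoteWithExtrasException is unwrapped
    }
    try {
      // "move servers [bogus:123] to rsgroup bogus" -> RSGroup does not exist: bogus
      admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
    } catch (ConstraintException expected) {
      // the target group is validated before the server address
    }
    try {
      // "balance rsgroup, group=bogus" -> RSGroup does not exist: bogus
      admin.balanceRSGroup("bogus");
    } catch (ConstraintException expected) {
      // matches the last ConstraintException logged in this excerpt
    }
  }
}
```

Each call reaches the master through RSGroupAdminService (ExecMasterService), which is why every rejection above is paired with an ipc.CallRunner DEBUG line carrying the same exception text.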
2023-07-16 14:15:45,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:45,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:45,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:45,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:45,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:45,571 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:45,574 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:45,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:45,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,577 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:45,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:45,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:45,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518145584, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:45,588 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:45,589 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:45,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,590 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:45,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:45,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,610 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=514 (was 510) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5d23f00a-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=774 (was 774), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=175 (was 175), AvailableMemoryMB=2413 (was 2413) 2023-07-16 14:15:45,610 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 14:15:45,630 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514, OpenFileDescriptor=774, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=175, AvailableMemoryMB=2413 2023-07-16 14:15:45,630 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 14:15:45,630 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-16 14:15:45,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:45,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
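
[Editor's note] The block of entries that repeats before and after every test method (remove rsgroup master, add rsgroup master, move servers [jenkins-hbase4.apache.org:41971] to rsgroup master, then the tolerated "Got this on setup, FYI" ConstraintException) is the harness trying to park the HMaster's address in a dedicated "master" group. A hedged sketch of that cleanup pattern follows; names and the masterAddress parameter are illustrative placeholders, not the harness's exact code.

```java
// Hedged sketch of the recurring setup/teardown pattern in this log. The move
// fails because the master (port 41971 here) is not an online region server,
// and the harness logs the ConstraintException as a WARN and carries on.
import java.io.IOException;
import java.util.Collections;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MasterGroupCleanupSketch {
  static void recreateMasterGroup(Connection conn, Address masterAddress) throws IOException {
    RSGroupAdminClient admin = new RSGroupAdminClient(conn);
    admin.removeRSGroup("master");   // "remove rsgroup master" (znode count drops to 3)
    admin.addRSGroup("master");      // "add rsgroup master" (znode count back to 4)
    try {
      // "move servers [<master host:port>] to rsgroup master" -- rejected because
      // the master process is not registered as an online region server
      admin.moveServers(Collections.singleton(masterAddress), "master");
    } catch (IOException expectedAndIgnored) {
      // corresponds to the WARN "Got this on setup, FYI ... ConstraintException"
    }
  }
}
```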
2023-07-16 14:15:45,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:45,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:45,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:45,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:45,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:45,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:45,650 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:45,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:45,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:45,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:45,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,666 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:45,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:45,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518145670, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:45,671 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:45,673 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:45,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,674 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:45,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:45,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:45,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 
14:15:45,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:45,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:45,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:45,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 14:15:45,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to default 2023-07-16 14:15:45,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,700 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:45,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:45,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:45,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,708 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:45,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:45,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:45,713 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-16 14:15:45,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-16 14:15:45,715 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:45,716 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:45,716 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:45,717 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:45,719 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:45,725 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:45,725 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:45,725 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:45,725 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:45,725 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:45,726 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 empty. 2023-07-16 14:15:45,726 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b empty. 2023-07-16 14:15:45,726 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db empty. 2023-07-16 14:15:45,726 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 empty. 2023-07-16 14:15:45,726 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 empty. 2023-07-16 14:15:45,727 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:45,727 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:45,727 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:45,727 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:45,727 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:45,727 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 14:15:45,756 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:45,759 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 82b66eabbc046f5b4c570c9f96bb21b8, NAME => 'Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:45,763 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8590dc13d0a02b5c406afa04da38d4db, NAME => 'Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:45,767 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => c9d7e12657e907c2a27ad033b130e697, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:45,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-16 14:15:45,826 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:45,826 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 82b66eabbc046f5b4c570c9f96bb21b8, disabling compactions & flushes 2023-07-16 14:15:45,827 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:45,827 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:45,827 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. after waiting 0 ms 2023-07-16 14:15:45,827 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 
2023-07-16 14:15:45,827 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:45,827 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 82b66eabbc046f5b4c570c9f96bb21b8: 2023-07-16 14:15:45,827 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49773d1faa70baaa6bb6cd577c68667b, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 8590dc13d0a02b5c406afa04da38d4db, disabling compactions & flushes 2023-07-16 14:15:45,828 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. after waiting 0 ms 2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:45,828 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 
2023-07-16 14:15:45,828 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 8590dc13d0a02b5c406afa04da38d4db: 2023-07-16 14:15:45,828 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4d1cd2602daf1cdbe91e29266359dc18, NAME => 'Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing c9d7e12657e907c2a27ad033b130e697, disabling compactions & flushes 2023-07-16 14:15:45,829 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. after waiting 0 ms 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:45,829 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:45,829 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for c9d7e12657e907c2a27ad033b130e697: 2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 49773d1faa70baaa6bb6cd577c68667b, disabling compactions & flushes 2023-07-16 14:15:45,855 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 
2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. after waiting 0 ms 2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:45,855 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:45,855 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 49773d1faa70baaa6bb6cd577c68667b: 2023-07-16 14:15:45,861 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:45,861 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 4d1cd2602daf1cdbe91e29266359dc18, disabling compactions & flushes 2023-07-16 14:15:45,862 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:45,862 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:45,862 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. after waiting 0 ms 2023-07-16 14:15:45,862 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:45,862 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 
2023-07-16 14:15:45,862 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 4d1cd2602daf1cdbe91e29266359dc18: 2023-07-16 14:15:45,865 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:45,866 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516945865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516945865"}]},"ts":"1689516945865"} 2023-07-16 14:15:45,866 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516945865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516945865"}]},"ts":"1689516945865"} 2023-07-16 14:15:45,866 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516945865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516945865"}]},"ts":"1689516945865"} 2023-07-16 14:15:45,866 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516945865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516945865"}]},"ts":"1689516945865"} 2023-07-16 14:15:45,866 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516945865"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516945865"}]},"ts":"1689516945865"} 2023-07-16 14:15:45,868 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 14:15:45,869 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:45,869 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516945869"}]},"ts":"1689516945869"} 2023-07-16 14:15:45,871 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-16 14:15:45,875 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:45,875 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:45,875 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:45,875 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:45,876 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, ASSIGN}] 2023-07-16 14:15:45,881 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, ASSIGN 2023-07-16 14:15:45,881 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, ASSIGN 2023-07-16 14:15:45,881 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, ASSIGN 2023-07-16 14:15:45,881 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, ASSIGN 2023-07-16 14:15:45,882 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:45,882 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:45,882 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44287,1689516924704; forceNewPlan=false, retain=false 2023-07-16 14:15:45,882 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:45,883 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, ASSIGN 2023-07-16 14:15:45,884 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43741,1689516920562; forceNewPlan=false, retain=false 2023-07-16 14:15:46,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-16 14:15:46,033 INFO [jenkins-hbase4:41971] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 14:15:46,037 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=49773d1faa70baaa6bb6cd577c68667b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,038 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946037"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946037"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946037"}]},"ts":"1689516946037"} 2023-07-16 14:15:46,038 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8590dc13d0a02b5c406afa04da38d4db, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,038 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=4d1cd2602daf1cdbe91e29266359dc18, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,038 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946038"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946038"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946038"}]},"ts":"1689516946038"} 2023-07-16 14:15:46,038 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946038"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946038"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946038"}]},"ts":"1689516946038"} 2023-07-16 14:15:46,039 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=c9d7e12657e907c2a27ad033b130e697, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,039 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946038"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946038"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946038"}]},"ts":"1689516946038"} 2023-07-16 14:15:46,039 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=82b66eabbc046f5b4c570c9f96bb21b8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,039 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946039"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946039"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946039"}]},"ts":"1689516946039"} 2023-07-16 14:15:46,040 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=139, state=RUNNABLE; OpenRegionProcedure 49773d1faa70baaa6bb6cd577c68667b, 
server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:46,041 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure 8590dc13d0a02b5c406afa04da38d4db, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:46,044 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=140, state=RUNNABLE; OpenRegionProcedure 4d1cd2602daf1cdbe91e29266359dc18, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,045 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=138, state=RUNNABLE; OpenRegionProcedure c9d7e12657e907c2a27ad033b130e697, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=136, state=RUNNABLE; OpenRegionProcedure 82b66eabbc046f5b4c570c9f96bb21b8, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8590dc13d0a02b5c406afa04da38d4db, NAME => 'Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 14:15:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:46,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,200 INFO [StoreOpener-8590dc13d0a02b5c406afa04da38d4db-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,202 DEBUG [StoreOpener-8590dc13d0a02b5c406afa04da38d4db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/f 2023-07-16 14:15:46,202 DEBUG [StoreOpener-8590dc13d0a02b5c406afa04da38d4db-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/f 2023-07-16 14:15:46,202 INFO [StoreOpener-8590dc13d0a02b5c406afa04da38d4db-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8590dc13d0a02b5c406afa04da38d4db columnFamilyName f 2023-07-16 14:15:46,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 82b66eabbc046f5b4c570c9f96bb21b8, NAME => 'Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 14:15:46,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:46,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,204 INFO [StoreOpener-8590dc13d0a02b5c406afa04da38d4db-1] regionserver.HStore(310): Store=8590dc13d0a02b5c406afa04da38d4db/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:46,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,205 INFO [StoreOpener-82b66eabbc046f5b4c570c9f96bb21b8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,207 DEBUG [StoreOpener-82b66eabbc046f5b4c570c9f96bb21b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/f 
2023-07-16 14:15:46,207 DEBUG [StoreOpener-82b66eabbc046f5b4c570c9f96bb21b8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/f 2023-07-16 14:15:46,207 INFO [StoreOpener-82b66eabbc046f5b4c570c9f96bb21b8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 82b66eabbc046f5b4c570c9f96bb21b8 columnFamilyName f 2023-07-16 14:15:46,207 INFO [StoreOpener-82b66eabbc046f5b4c570c9f96bb21b8-1] regionserver.HStore(310): Store=82b66eabbc046f5b4c570c9f96bb21b8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:46,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8590dc13d0a02b5c406afa04da38d4db; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9645735200, jitterRate=-0.10167090594768524}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:46,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8590dc13d0a02b5c406afa04da38d4db: 2023-07-16 14:15:46,213 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db., pid=142, masterSystemTime=1689516946193 2023-07-16 14:15:46,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:46,215 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 82b66eabbc046f5b4c570c9f96bb21b8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10101543360, jitterRate=-0.059220463037490845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:46,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 82b66eabbc046f5b4c570c9f96bb21b8: 2023-07-16 14:15:46,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,216 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49773d1faa70baaa6bb6cd577c68667b, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 14:15:46,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:46,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,217 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8., pid=145, masterSystemTime=1689516946198 2023-07-16 14:15:46,217 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,217 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8590dc13d0a02b5c406afa04da38d4db, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,217 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946217"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516946217"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516946217"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516946217"}]},"ts":"1689516946217"} 2023-07-16 14:15:46,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4d1cd2602daf1cdbe91e29266359dc18, NAME => 'Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 14:15:46,219 INFO [StoreOpener-49773d1faa70baaa6bb6cd577c68667b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:46,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,221 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=82b66eabbc046f5b4c570c9f96bb21b8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,221 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946220"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516946220"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516946220"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516946220"}]},"ts":"1689516946220"} 2023-07-16 14:15:46,221 DEBUG [StoreOpener-49773d1faa70baaa6bb6cd577c68667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/f 2023-07-16 14:15:46,222 DEBUG [StoreOpener-49773d1faa70baaa6bb6cd577c68667b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/f 2023-07-16 14:15:46,222 INFO [StoreOpener-49773d1faa70baaa6bb6cd577c68667b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49773d1faa70baaa6bb6cd577c68667b columnFamilyName f 2023-07-16 14:15:46,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-16 14:15:46,222 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; OpenRegionProcedure 8590dc13d0a02b5c406afa04da38d4db, server=jenkins-hbase4.apache.org,44287,1689516924704 in 178 msec 2023-07-16 14:15:46,223 INFO [StoreOpener-49773d1faa70baaa6bb6cd577c68667b-1] regionserver.HStore(310): Store=49773d1faa70baaa6bb6cd577c68667b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:46,224 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, ASSIGN in 348 msec 2023-07-16 14:15:46,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=136 2023-07-16 14:15:46,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=136, state=SUCCESS; OpenRegionProcedure 82b66eabbc046f5b4c570c9f96bb21b8, server=jenkins-hbase4.apache.org,43741,1689516920562 in 174 msec 2023-07-16 14:15:46,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, ASSIGN in 349 msec 2023-07-16 14:15:46,227 INFO [StoreOpener-4d1cd2602daf1cdbe91e29266359dc18-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,228 DEBUG [StoreOpener-4d1cd2602daf1cdbe91e29266359dc18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/f 2023-07-16 14:15:46,228 DEBUG [StoreOpener-4d1cd2602daf1cdbe91e29266359dc18-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/f 2023-07-16 14:15:46,228 INFO [StoreOpener-4d1cd2602daf1cdbe91e29266359dc18-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4d1cd2602daf1cdbe91e29266359dc18 columnFamilyName f 2023-07-16 14:15:46,229 INFO [StoreOpener-4d1cd2602daf1cdbe91e29266359dc18-1] regionserver.HStore(310): Store=4d1cd2602daf1cdbe91e29266359dc18/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:46,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:46,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49773d1faa70baaa6bb6cd577c68667b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10446598880, jitterRate=-0.02708466351032257}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:46,236 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:46,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49773d1faa70baaa6bb6cd577c68667b: 2023-07-16 14:15:46,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4d1cd2602daf1cdbe91e29266359dc18; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10315445120, jitterRate=-0.03929930925369263}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:46,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4d1cd2602daf1cdbe91e29266359dc18: 2023-07-16 14:15:46,236 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b., pid=141, masterSystemTime=1689516946193 2023-07-16 14:15:46,237 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18., pid=143, masterSystemTime=1689516946198 2023-07-16 14:15:46,238 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,238 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,239 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=49773d1faa70baaa6bb6cd577c68667b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,239 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 
2023-07-16 14:15:46,239 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946239"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516946239"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516946239"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516946239"}]},"ts":"1689516946239"} 2023-07-16 14:15:46,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c9d7e12657e907c2a27ad033b130e697, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 14:15:46,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:46,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,243 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=4d1cd2602daf1cdbe91e29266359dc18, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,243 INFO [StoreOpener-c9d7e12657e907c2a27ad033b130e697-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,243 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946243"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516946243"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516946243"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516946243"}]},"ts":"1689516946243"} 2023-07-16 14:15:46,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=139 2023-07-16 14:15:46,247 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=139, state=SUCCESS; OpenRegionProcedure 49773d1faa70baaa6bb6cd577c68667b, server=jenkins-hbase4.apache.org,44287,1689516924704 in 201 msec 2023-07-16 14:15:46,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=140 2023-07-16 14:15:46,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, ASSIGN in 372 msec 2023-07-16 14:15:46,248 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; OpenRegionProcedure 4d1cd2602daf1cdbe91e29266359dc18, server=jenkins-hbase4.apache.org,43741,1689516920562 in 200 msec 2023-07-16 14:15:46,249 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, ASSIGN in 373 msec 2023-07-16 14:15:46,250 DEBUG [StoreOpener-c9d7e12657e907c2a27ad033b130e697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/f 2023-07-16 14:15:46,250 DEBUG [StoreOpener-c9d7e12657e907c2a27ad033b130e697-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/f 2023-07-16 14:15:46,251 INFO [StoreOpener-c9d7e12657e907c2a27ad033b130e697-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c9d7e12657e907c2a27ad033b130e697 columnFamilyName f 2023-07-16 14:15:46,252 INFO [StoreOpener-c9d7e12657e907c2a27ad033b130e697-1] regionserver.HStore(310): Store=c9d7e12657e907c2a27ad033b130e697/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:46,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:46,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c9d7e12657e907c2a27ad033b130e697; next sequenceid=2; 
SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10056834240, jitterRate=-0.06338432431221008}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:46,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c9d7e12657e907c2a27ad033b130e697: 2023-07-16 14:15:46,262 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697., pid=144, masterSystemTime=1689516946198 2023-07-16 14:15:46,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:46,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:46,264 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=c9d7e12657e907c2a27ad033b130e697, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,265 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946264"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516946264"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516946264"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516946264"}]},"ts":"1689516946264"} 2023-07-16 14:15:46,269 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=138 2023-07-16 14:15:46,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=138, state=SUCCESS; OpenRegionProcedure c9d7e12657e907c2a27ad033b130e697, server=jenkins-hbase4.apache.org,43741,1689516920562 in 221 msec 2023-07-16 14:15:46,271 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=135 2023-07-16 14:15:46,271 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, ASSIGN in 394 msec 2023-07-16 14:15:46,272 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:46,272 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516946272"}]},"ts":"1689516946272"} 2023-07-16 14:15:46,274 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-16 14:15:46,276 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 
14:15:46,282 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 566 msec 2023-07-16 14:15:46,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-16 14:15:46,326 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-16 14:15:46,326 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-16 14:15:46,326 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:46,345 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-16 14:15:46,345 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:46,345 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-16 14:15:46,346 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:46,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 14:15:46,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:46,355 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 14:15:46,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 14:15:46,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-16 14:15:46,360 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516946360"}]},"ts":"1689516946360"} 2023-07-16 14:15:46,362 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-16 14:15:46,363 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-16 14:15:46,364 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, UNASSIGN}] 2023-07-16 14:15:46,366 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, UNASSIGN 2023-07-16 14:15:46,367 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=49773d1faa70baaa6bb6cd577c68667b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,367 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946367"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946367"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946367"}]},"ts":"1689516946367"} 2023-07-16 14:15:46,368 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, UNASSIGN 2023-07-16 14:15:46,368 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, UNASSIGN 2023-07-16 14:15:46,369 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, UNASSIGN 2023-07-16 14:15:46,369 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, UNASSIGN 2023-07-16 14:15:46,369 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=150, state=RUNNABLE; CloseRegionProcedure 49773d1faa70baaa6bb6cd577c68667b, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:46,370 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=4d1cd2602daf1cdbe91e29266359dc18, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,370 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946370"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946370"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946370"}]},"ts":"1689516946370"} 2023-07-16 14:15:46,372 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=151, state=RUNNABLE; CloseRegionProcedure 4d1cd2602daf1cdbe91e29266359dc18, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,374 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8590dc13d0a02b5c406afa04da38d4db, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,374 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=82b66eabbc046f5b4c570c9f96bb21b8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,374 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=c9d7e12657e907c2a27ad033b130e697, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,374 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946374"}]},"ts":"1689516946374"} 2023-07-16 14:15:46,374 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946374"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946374"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946374"}]},"ts":"1689516946374"} 2023-07-16 14:15:46,374 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946373"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516946373"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516946373"}]},"ts":"1689516946373"} 2023-07-16 14:15:46,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=147, state=RUNNABLE; CloseRegionProcedure 82b66eabbc046f5b4c570c9f96bb21b8, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,376 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=149, state=RUNNABLE; CloseRegionProcedure c9d7e12657e907c2a27ad033b130e697, server=jenkins-hbase4.apache.org,43741,1689516920562}] 2023-07-16 14:15:46,377 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=148, state=RUNNABLE; CloseRegionProcedure 8590dc13d0a02b5c406afa04da38d4db, server=jenkins-hbase4.apache.org,44287,1689516924704}] 2023-07-16 14:15:46,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-16 
14:15:46,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,530 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,531 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c9d7e12657e907c2a27ad033b130e697, disabling compactions & flushes 2023-07-16 14:15:46,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8590dc13d0a02b5c406afa04da38d4db, disabling compactions & flushes 2023-07-16 14:15:46,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:46,532 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,532 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. after waiting 0 ms 2023-07-16 14:15:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. after waiting 0 ms 2023-07-16 14:15:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 2023-07-16 14:15:46,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:46,537 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:46,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697. 
2023-07-16 14:15:46,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c9d7e12657e907c2a27ad033b130e697: 2023-07-16 14:15:46,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db. 2023-07-16 14:15:46,538 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8590dc13d0a02b5c406afa04da38d4db: 2023-07-16 14:15:46,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4d1cd2602daf1cdbe91e29266359dc18, disabling compactions & flushes 2023-07-16 14:15:46,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. after waiting 0 ms 2023-07-16 14:15:46,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,543 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=c9d7e12657e907c2a27ad033b130e697, regionState=CLOSED 2023-07-16 14:15:46,543 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946543"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516946543"}]},"ts":"1689516946543"} 2023-07-16 14:15:46,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49773d1faa70baaa6bb6cd577c68667b, disabling compactions & flushes 2023-07-16 14:15:46,544 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 
2023-07-16 14:15:46,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. after waiting 0 ms 2023-07-16 14:15:46,544 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,545 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8590dc13d0a02b5c406afa04da38d4db, regionState=CLOSED 2023-07-16 14:15:46,545 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946544"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516946544"}]},"ts":"1689516946544"} 2023-07-16 14:15:46,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:46,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18. 2023-07-16 14:15:46,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4d1cd2602daf1cdbe91e29266359dc18: 2023-07-16 14:15:46,555 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:46,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 82b66eabbc046f5b4c570c9f96bb21b8, disabling compactions & flushes 2023-07-16 14:15:46,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b. 2023-07-16 14:15:46,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 
after waiting 0 ms 2023-07-16 14:15:46,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=149 2023-07-16 14:15:46,558 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=149, state=SUCCESS; CloseRegionProcedure c9d7e12657e907c2a27ad033b130e697, server=jenkins-hbase4.apache.org,43741,1689516920562 in 170 msec 2023-07-16 14:15:46,558 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=4d1cd2602daf1cdbe91e29266359dc18, regionState=CLOSED 2023-07-16 14:15:46,558 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946558"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516946558"}]},"ts":"1689516946558"} 2023-07-16 14:15:46,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49773d1faa70baaa6bb6cd577c68667b: 2023-07-16 14:15:46,559 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=148 2023-07-16 14:15:46,559 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=148, state=SUCCESS; CloseRegionProcedure 8590dc13d0a02b5c406afa04da38d4db, server=jenkins-hbase4.apache.org,44287,1689516924704 in 170 msec 2023-07-16 14:15:46,560 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=c9d7e12657e907c2a27ad033b130e697, UNASSIGN in 194 msec 2023-07-16 14:15:46,560 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8590dc13d0a02b5c406afa04da38d4db, UNASSIGN in 195 msec 2023-07-16 14:15:46,562 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=49773d1faa70baaa6bb6cd577c68667b, regionState=CLOSED 2023-07-16 14:15:46,562 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689516946562"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516946562"}]},"ts":"1689516946562"} 2023-07-16 14:15:46,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=151 2023-07-16 14:15:46,563 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=151, state=SUCCESS; CloseRegionProcedure 4d1cd2602daf1cdbe91e29266359dc18, server=jenkins-hbase4.apache.org,43741,1689516920562 in 188 msec 2023-07-16 14:15:46,565 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=4d1cd2602daf1cdbe91e29266359dc18, UNASSIGN in 199 msec 2023-07-16 14:15:46,566 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=150 2023-07-16 14:15:46,566 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=150, state=SUCCESS; CloseRegionProcedure 49773d1faa70baaa6bb6cd577c68667b, server=jenkins-hbase4.apache.org,44287,1689516924704 in 195 msec 2023-07-16 14:15:46,566 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:46,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8. 2023-07-16 14:15:46,567 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 82b66eabbc046f5b4c570c9f96bb21b8: 2023-07-16 14:15:46,568 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=49773d1faa70baaa6bb6cd577c68667b, UNASSIGN in 202 msec 2023-07-16 14:15:46,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,569 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=82b66eabbc046f5b4c570c9f96bb21b8, regionState=CLOSED 2023-07-16 14:15:46,570 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689516946569"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516946569"}]},"ts":"1689516946569"} 2023-07-16 14:15:46,572 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=147 2023-07-16 14:15:46,572 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=147, state=SUCCESS; CloseRegionProcedure 82b66eabbc046f5b4c570c9f96bb21b8, server=jenkins-hbase4.apache.org,43741,1689516920562 in 196 msec 2023-07-16 14:15:46,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=146 2023-07-16 14:15:46,575 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=82b66eabbc046f5b4c570c9f96bb21b8, UNASSIGN in 208 msec 2023-07-16 14:15:46,576 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516946576"}]},"ts":"1689516946576"} 2023-07-16 14:15:46,579 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-16 14:15:46,581 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-16 14:15:46,585 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 227 msec 2023-07-16 14:15:46,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if 
procedure is done pid=146 2023-07-16 14:15:46,662 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-16 14:15:46,663 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,670 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:46,671 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-16 14:15:46,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1489682713, current retry=0 2023-07-16 14:15:46,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1489682713. 
2023-07-16 14:15:46,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:46,677 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 14:15:46,680 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:46,682 INFO [Listener at localhost/36419] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 14:15:46,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 14:15:46,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:46,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:59606 deadline: 1689517006682, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-16 14:15:46,684 DEBUG [Listener at localhost/36419] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-16 14:15:46,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-16 14:15:46,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,688 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1489682713' 2023-07-16 14:15:46,688 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,692 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:46,697 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,697 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-16 14:15:46,697 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,697 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,697 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,700 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/f, FileablePath, 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/recovered.edits] 2023-07-16 14:15:46,700 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/recovered.edits] 2023-07-16 14:15:46,702 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/recovered.edits] 2023-07-16 14:15:46,702 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/recovered.edits] 2023-07-16 14:15:46,702 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/f, FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/recovered.edits] 2023-07-16 14:15:46,712 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697/recovered.edits/4.seqid 2023-07-16 14:15:46,712 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18/recovered.edits/4.seqid 2023-07-16 14:15:46,713 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/c9d7e12657e907c2a27ad033b130e697 2023-07-16 14:15:46,714 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/4d1cd2602daf1cdbe91e29266359dc18 2023-07-16 14:15:46,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from 
FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8/recovered.edits/4.seqid 2023-07-16 14:15:46,714 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db/recovered.edits/4.seqid 2023-07-16 14:15:46,714 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/82b66eabbc046f5b4c570c9f96bb21b8 2023-07-16 14:15:46,715 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/recovered.edits/4.seqid to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/archive/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b/recovered.edits/4.seqid 2023-07-16 14:15:46,715 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/8590dc13d0a02b5c406afa04da38d4db 2023-07-16 14:15:46,715 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/.tmp/data/default/Group_testDisabledTableMove/49773d1faa70baaa6bb6cd577c68667b 2023-07-16 14:15:46,716 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 14:15:46,718 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,721 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-16 14:15:46,727 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-16 14:15:46,728 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,728 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-16 14:15:46,728 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516946728"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,728 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516946728"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,729 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516946728"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,729 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516946728"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,729 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516946728"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,732 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 14:15:46,733 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 82b66eabbc046f5b4c570c9f96bb21b8, NAME => 'Group_testDisabledTableMove,,1689516945710.82b66eabbc046f5b4c570c9f96bb21b8.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8590dc13d0a02b5c406afa04da38d4db, NAME => 'Group_testDisabledTableMove,aaaaa,1689516945710.8590dc13d0a02b5c406afa04da38d4db.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => c9d7e12657e907c2a27ad033b130e697, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689516945710.c9d7e12657e907c2a27ad033b130e697.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 49773d1faa70baaa6bb6cd577c68667b, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689516945710.49773d1faa70baaa6bb6cd577c68667b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 4d1cd2602daf1cdbe91e29266359dc18, NAME => 'Group_testDisabledTableMove,zzzzz,1689516945710.4d1cd2602daf1cdbe91e29266359dc18.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 14:15:46,733 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-16 14:15:46,733 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516946733"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:46,735 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-16 14:15:46,738 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 14:15:46,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 53 msec 2023-07-16 14:15:46,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-16 14:15:46,799 INFO [Listener at localhost/36419] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-16 14:15:46,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:46,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:46,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:46,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933] to rsgroup default 2023-07-16 14:15:46,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:46,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1489682713, current retry=0 2023-07-16 14:15:46,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34921,1689516920700, jenkins-hbase4.apache.org,41933,1689516920766] are moved back to Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1489682713 => default 2023-07-16 14:15:46,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:46,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1489682713 2023-07-16 14:15:46,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,812 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:46,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:46,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:46,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:46,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:46,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:46,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:46,816 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:46,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:46,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:46,824 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:46,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:46,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:46,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:46,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:46,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:46,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518146833, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:46,834 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:46,836 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:46,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,837 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:46,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:46,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:46,861 INFO [Listener at localhost/36419] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=515 (was 514) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_782818926_17 at /127.0.0.1:42122 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_611249476_17 at /127.0.0.1:51428 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a7dbe2d-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cec13a7-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=797 (was 774) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=468 (was 468), ProcessCount=175 (was 175), AvailableMemoryMB=2312 (was 2413) 2023-07-16 14:15:46,861 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 14:15:46,882 INFO [Listener at localhost/36419] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=515, OpenFileDescriptor=797, MaxFileDescriptor=60000, SystemLoadAverage=468, ProcessCount=175, AvailableMemoryMB=2311 2023-07-16 14:15:46,882 WARN [Listener at localhost/36419] hbase.ResourceChecker(130): Thread=515 is superior to 500 2023-07-16 14:15:46,882 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-16 14:15:46,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:46,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:46,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:46,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:46,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:46,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:46,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:46,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:46,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:46,900 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:46,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:46,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 
14:15:46,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:46,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:46,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:46,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,916 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:41971] to rsgroup master 2023-07-16 14:15:46,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:46,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:59606 deadline: 1689518146916, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 2023-07-16 14:15:46,917 WARN [Listener at localhost/36419] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:41971 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:46,918 INFO [Listener at localhost/36419] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:46,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:46,919 INFO [Listener at localhost/36419] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34921, jenkins-hbase4.apache.org:41933, jenkins-hbase4.apache.org:43741, jenkins-hbase4.apache.org:44287], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:46,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:46,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41971] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:46,920 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 14:15:46,921 INFO [Listener at localhost/36419] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 14:15:46,921 DEBUG [Listener at localhost/36419] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x47d8ccec to 127.0.0.1:63627 2023-07-16 14:15:46,921 DEBUG [Listener at localhost/36419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,923 DEBUG [Listener at localhost/36419] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 14:15:46,923 DEBUG [Listener at localhost/36419] util.JVMClusterUtil(257): Found active master hash=1742323382, stopped=false 2023-07-16 14:15:46,923 DEBUG [Listener at localhost/36419] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 14:15:46,923 DEBUG [Listener at localhost/36419] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 14:15:46,923 INFO [Listener at localhost/36419] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:46,925 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:46,925 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:46,925 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:46,925 DEBUG 
[Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:46,925 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:46,925 INFO [Listener at localhost/36419] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 14:15:46,925 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:46,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:46,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:46,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:46,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:46,926 DEBUG [Listener at localhost/36419] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x041f5a35 to 127.0.0.1:63627 2023-07-16 14:15:46,926 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:46,926 DEBUG [Listener at localhost/36419] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43741,1689516920562' ***** 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34921,1689516920700' ***** 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:46,927 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41933,1689516920766' ***** 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44287,1689516924704' ***** 2023-07-16 14:15:46,927 INFO [Listener at localhost/36419] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:46,928 INFO 
[RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:46,928 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:46,927 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:46,942 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-16 14:15:46,942 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-16 14:15:46,949 INFO [RS:0;jenkins-hbase4:43741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5c09b86c{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:46,949 INFO [RS:2;jenkins-hbase4:41933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@188ba91{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:46,949 INFO [RS:3;jenkins-hbase4:44287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@78c1ca58{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:46,949 INFO [RS:1;jenkins-hbase4:34921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@21d37555{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:46,955 INFO [RS:3;jenkins-hbase4:44287] server.AbstractConnector(383): Stopped ServerConnector@77e30ea5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:46,955 INFO [RS:1;jenkins-hbase4:34921] server.AbstractConnector(383): Stopped ServerConnector@74f1938f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:46,955 INFO [RS:3;jenkins-hbase4:44287] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:46,955 INFO [RS:0;jenkins-hbase4:43741] server.AbstractConnector(383): Stopped ServerConnector@600533b4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:46,955 INFO [RS:2;jenkins-hbase4:41933] server.AbstractConnector(383): Stopped ServerConnector@532a3d3{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:46,955 INFO [RS:0;jenkins-hbase4:43741] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:46,955 INFO [RS:1;jenkins-hbase4:34921] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:46,956 INFO [RS:2;jenkins-hbase4:41933] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:46,957 INFO [RS:0;jenkins-hbase4:43741] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3063b687{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:46,958 INFO [RS:1;jenkins-hbase4:34921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33aee7d7{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:46,959 INFO [RS:0;jenkins-hbase4:43741] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@3b1f63f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:46,956 INFO [RS:3;jenkins-hbase4:44287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1755bd06{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:46,959 INFO [RS:1;jenkins-hbase4:34921] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@13dfbe27{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:46,960 INFO [RS:3;jenkins-hbase4:44287] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@623affc3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:46,959 INFO [RS:2;jenkins-hbase4:41933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@ac4d373{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:46,961 INFO [RS:2;jenkins-hbase4:41933] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ad75e86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:46,964 INFO [RS:3;jenkins-hbase4:44287] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:46,964 INFO [RS:0;jenkins-hbase4:43741] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:46,964 INFO [RS:3;jenkins-hbase4:44287] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:46,964 INFO [RS:0;jenkins-hbase4:43741] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:46,964 INFO [RS:3;jenkins-hbase4:44287] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:46,964 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:46,964 INFO [RS:0;jenkins-hbase4:43741] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:46,964 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(3305): Received CLOSE for 701c50185fdc12fe0464bfa3b96e779c 2023-07-16 14:15:46,964 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(3305): Received CLOSE for 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:46,964 INFO [RS:1;jenkins-hbase4:34921] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:46,964 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:46,964 INFO [RS:1;jenkins-hbase4:34921] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-16 14:15:46,964 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:46,964 INFO [RS:1;jenkins-hbase4:34921] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:46,965 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:46,965 DEBUG [RS:1;jenkins-hbase4:34921] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4e304859 to 127.0.0.1:63627 2023-07-16 14:15:46,965 DEBUG [RS:1;jenkins-hbase4:34921] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,967 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(3305): Received CLOSE for 2aaf0ce709cf8e71a96440aaa2c8020d 2023-07-16 14:15:46,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 701c50185fdc12fe0464bfa3b96e779c, disabling compactions & flushes 2023-07-16 14:15:46,967 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:46,967 DEBUG [RS:0;jenkins-hbase4:43741] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x79b6f309 to 127.0.0.1:63627 2023-07-16 14:15:46,967 DEBUG [RS:0;jenkins-hbase4:43741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,967 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 14:15:46,967 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1478): Online Regions={30a8ae896fd23311b4a9b3f859e17ea6=testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6.} 2023-07-16 14:15:46,967 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(3305): Received CLOSE for bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:46,968 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:46,968 DEBUG [RS:3;jenkins-hbase4:44287] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x02024450 to 127.0.0.1:63627 2023-07-16 14:15:46,968 DEBUG [RS:3;jenkins-hbase4:44287] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,968 INFO [RS:3;jenkins-hbase4:44287] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:46,968 INFO [RS:3;jenkins-hbase4:44287] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:46,968 INFO [RS:3;jenkins-hbase4:44287] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:46,968 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 14:15:46,968 DEBUG [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1504): Waiting on 30a8ae896fd23311b4a9b3f859e17ea6 2023-07-16 14:15:46,967 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:46,967 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34921,1689516920700; all regions closed. 2023-07-16 14:15:46,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 
2023-07-16 14:15:46,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. after waiting 0 ms 2023-07-16 14:15:46,969 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:46,969 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 701c50185fdc12fe0464bfa3b96e779c 1/1 column families, dataSize=27.06 KB heapSize=44.68 KB 2023-07-16 14:15:46,975 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 30a8ae896fd23311b4a9b3f859e17ea6, disabling compactions & flushes 2023-07-16 14:15:46,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. after waiting 0 ms 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:46,976 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-16 14:15:46,976 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1478): Online Regions={701c50185fdc12fe0464bfa3b96e779c=hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c., 2aaf0ce709cf8e71a96440aaa2c8020d=unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d., bb99c7296a6419e19ffe990276a43f38=hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:46,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:46,976 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1504): Waiting on 1588230740, 2aaf0ce709cf8e71a96440aaa2c8020d, 701c50185fdc12fe0464bfa3b96e779c, bb99c7296a6419e19ffe990276a43f38 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:46,976 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:46,976 INFO [RS:2;jenkins-hbase4:41933] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:46,976 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=77.76 KB heapSize=122.41 KB 2023-07-16 14:15:46,977 INFO [RS:2;jenkins-hbase4:41933] 
flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:46,977 INFO [RS:2;jenkins-hbase4:41933] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:46,977 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:46,977 DEBUG [RS:2;jenkins-hbase4:41933] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3f90762c to 127.0.0.1:63627 2023-07-16 14:15:46,977 DEBUG [RS:2;jenkins-hbase4:41933] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:46,977 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41933,1689516920766; all regions closed. 2023-07-16 14:15:46,977 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:46,985 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:46,985 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:46,985 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:46,985 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:46,992 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/testRename/30a8ae896fd23311b4a9b3f859e17ea6/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 14:15:46,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 2023-07-16 14:15:46,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 30a8ae896fd23311b4a9b3f859e17ea6: 2023-07-16 14:15:46,995 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689516940085.30a8ae896fd23311b4a9b3f859e17ea6. 
2023-07-16 14:15:47,015 DEBUG [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,016 INFO [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34921%2C1689516920700.meta:.meta(num 1689516923440) 2023-07-16 14:15:47,023 DEBUG [RS:2;jenkins-hbase4:41933] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,023 INFO [RS:2;jenkins-hbase4:41933] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41933%2C1689516920766:(num 1689516923074) 2023-07-16 14:15:47,023 DEBUG [RS:2;jenkins-hbase4:41933] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:47,023 INFO [RS:2;jenkins-hbase4:41933] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:47,026 INFO [RS:2;jenkins-hbase4:41933] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:47,027 INFO [RS:2;jenkins-hbase4:41933] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:47,027 INFO [RS:2;jenkins-hbase4:41933] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:47,027 INFO [RS:2;jenkins-hbase4:41933] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:47,027 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:47,029 INFO [RS:2;jenkins-hbase4:41933] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41933 2023-07-16 14:15:47,039 DEBUG [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,040 INFO [RS:1;jenkins-hbase4:34921] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34921%2C1689516920700:(num 1689516923075) 2023-07-16 14:15:47,040 DEBUG [RS:1;jenkins-hbase4:34921] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:47,040 INFO [RS:1;jenkins-hbase4:34921] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:47,042 INFO [RS:1;jenkins-hbase4:34921] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:47,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.06 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/.tmp/m/c6d049deffdf49e380e1f87e47f50529 2023-07-16 14:15:47,043 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:47,043 INFO [RS:1;jenkins-hbase4:34921] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:47,044 INFO [RS:1;jenkins-hbase4:34921] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-16 14:15:47,044 INFO [RS:1;jenkins-hbase4:34921] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:47,045 INFO [RS:1;jenkins-hbase4:34921] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34921 2023-07-16 14:15:47,045 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:47,045 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,045 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41933,1689516920766 2023-07-16 14:15:47,046 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,046 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41933,1689516920766] 2023-07-16 14:15:47,046 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41933,1689516920766; numProcessing=1 2023-07-16 14:15:47,049 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41933,1689516920766 already deleted, retry=false 2023-07-16 14:15:47,049 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41933,1689516920766 expired; onlineServers=3 2023-07-16 14:15:47,050 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=71.95 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/info/6ea33a8a69f0414da9d96a8d732b889a 2023-07-16 14:15:47,050 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:47,050 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,051 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:47,050 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34921,1689516920700 2023-07-16 14:15:47,054 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34921,1689516920700] 2023-07-16 14:15:47,054 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34921,1689516920700; numProcessing=2 2023-07-16 14:15:47,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6d049deffdf49e380e1f87e47f50529 2023-07-16 14:15:47,056 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34921,1689516920700 already deleted, retry=false 2023-07-16 14:15:47,056 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34921,1689516920700 expired; onlineServers=2 2023-07-16 14:15:47,057 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/.tmp/m/c6d049deffdf49e380e1f87e47f50529 as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m/c6d049deffdf49e380e1f87e47f50529 2023-07-16 14:15:47,058 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ea33a8a69f0414da9d96a8d732b889a 2023-07-16 14:15:47,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c6d049deffdf49e380e1f87e47f50529 2023-07-16 14:15:47,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/m/c6d049deffdf49e380e1f87e47f50529, entries=28, sequenceid=101, filesize=6.1 K 2023-07-16 14:15:47,073 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.06 KB/27714, heapSize ~44.66 KB/45736, currentSize=0 B/0 for 701c50185fdc12fe0464bfa3b96e779c in 104ms, sequenceid=101, compaction requested=false 2023-07-16 14:15:47,077 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/rep_barrier/83307cb7b64e41adad5cf4ec6e9a9fef 2023-07-16 14:15:47,083 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/rsgroup/701c50185fdc12fe0464bfa3b96e779c/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-16 14:15:47,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:47,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 701c50185fdc12fe0464bfa3b96e779c: 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689516923630.701c50185fdc12fe0464bfa3b96e779c. 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2aaf0ce709cf8e71a96440aaa2c8020d, disabling compactions & flushes 2023-07-16 14:15:47,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. after waiting 0 ms 2023-07-16 14:15:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:47,087 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83307cb7b64e41adad5cf4ec6e9a9fef 2023-07-16 14:15:47,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/default/unmovedTable/2aaf0ce709cf8e71a96440aaa2c8020d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 14:15:47,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 
2023-07-16 14:15:47,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2aaf0ce709cf8e71a96440aaa2c8020d: 2023-07-16 14:15:47,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689516941753.2aaf0ce709cf8e71a96440aaa2c8020d. 2023-07-16 14:15:47,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bb99c7296a6419e19ffe990276a43f38, disabling compactions & flushes 2023-07-16 14:15:47,107 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:47,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:47,107 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. after waiting 0 ms 2023-07-16 14:15:47,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:47,144 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=214 (bloomFilter=false), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/table/a16d03c372054fbeb5047f4d39acf999 2023-07-16 14:15:47,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/namespace/bb99c7296a6419e19ffe990276a43f38/recovered.edits/15.seqid, newMaxSeqId=15, maxSeqId=12 2023-07-16 14:15:47,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 2023-07-16 14:15:47,147 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bb99c7296a6419e19ffe990276a43f38: 2023-07-16 14:15:47,148 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689516923739.bb99c7296a6419e19ffe990276a43f38. 
2023-07-16 14:15:47,153 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a16d03c372054fbeb5047f4d39acf999 2023-07-16 14:15:47,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/info/6ea33a8a69f0414da9d96a8d732b889a as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info/6ea33a8a69f0414da9d96a8d732b889a 2023-07-16 14:15:47,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6ea33a8a69f0414da9d96a8d732b889a 2023-07-16 14:15:47,162 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/info/6ea33a8a69f0414da9d96a8d732b889a, entries=97, sequenceid=214, filesize=16.0 K 2023-07-16 14:15:47,163 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/rep_barrier/83307cb7b64e41adad5cf4ec6e9a9fef as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier/83307cb7b64e41adad5cf4ec6e9a9fef 2023-07-16 14:15:47,168 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43741,1689516920562; all regions closed. 2023-07-16 14:15:47,177 DEBUG [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-16 14:15:47,180 DEBUG [RS:0;jenkins-hbase4:43741] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 83307cb7b64e41adad5cf4ec6e9a9fef 2023-07-16 14:15:47,180 INFO [RS:0;jenkins-hbase4:43741] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43741%2C1689516920562:(num 1689516923074) 2023-07-16 14:15:47,180 DEBUG [RS:0;jenkins-hbase4:43741] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:47,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/rep_barrier/83307cb7b64e41adad5cf4ec6e9a9fef, entries=18, sequenceid=214, filesize=6.9 K 2023-07-16 14:15:47,180 INFO [RS:0;jenkins-hbase4:43741] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:47,180 INFO [RS:0;jenkins-hbase4:43741] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:47,180 INFO [RS:0;jenkins-hbase4:43741] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:47,180 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 14:15:47,180 INFO [RS:0;jenkins-hbase4:43741] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:47,181 INFO [RS:0;jenkins-hbase4:43741] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:47,182 INFO [RS:0;jenkins-hbase4:43741] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43741 2023-07-16 14:15:47,182 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/.tmp/table/a16d03c372054fbeb5047f4d39acf999 as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table/a16d03c372054fbeb5047f4d39acf999 2023-07-16 14:15:47,183 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:47,183 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,183 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43741,1689516920562 2023-07-16 14:15:47,185 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43741,1689516920562] 2023-07-16 14:15:47,185 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43741,1689516920562; numProcessing=3 2023-07-16 14:15:47,187 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43741,1689516920562 already deleted, retry=false 2023-07-16 14:15:47,187 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43741,1689516920562 expired; onlineServers=1 2023-07-16 14:15:47,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a16d03c372054fbeb5047f4d39acf999 2023-07-16 14:15:47,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/table/a16d03c372054fbeb5047f4d39acf999, entries=27, sequenceid=214, filesize=7.2 K 2023-07-16 14:15:47,190 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~77.76 KB/79623, heapSize ~122.36 KB/125296, currentSize=0 B/0 for 1588230740 in 214ms, sequenceid=214, compaction requested=false 2023-07-16 14:15:47,208 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/data/hbase/meta/1588230740/recovered.edits/217.seqid, newMaxSeqId=217, maxSeqId=19 2023-07-16 14:15:47,209 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:47,210 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:47,210 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:47,210 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:47,377 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44287,1689516924704; all regions closed. 2023-07-16 14:15:47,384 DEBUG [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,384 INFO [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44287%2C1689516924704.meta:.meta(num 1689516925759) 2023-07-16 14:15:47,389 DEBUG [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/oldWALs 2023-07-16 14:15:47,389 INFO [RS:3;jenkins-hbase4:44287] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44287%2C1689516924704:(num 1689516925023) 2023-07-16 14:15:47,389 DEBUG [RS:3;jenkins-hbase4:44287] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:47,389 INFO [RS:3;jenkins-hbase4:44287] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:47,390 INFO [RS:3;jenkins-hbase4:44287] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:47,390 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 14:15:47,391 INFO [RS:3;jenkins-hbase4:44287] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44287 2023-07-16 14:15:47,392 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44287,1689516924704 2023-07-16 14:15:47,392 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:47,393 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44287,1689516924704] 2023-07-16 14:15:47,393 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44287,1689516924704; numProcessing=4 2023-07-16 14:15:47,394 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44287,1689516924704 already deleted, retry=false 2023-07-16 14:15:47,394 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44287,1689516924704 expired; onlineServers=0 2023-07-16 14:15:47,394 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41971,1689516918385' ***** 2023-07-16 14:15:47,394 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 14:15:47,395 DEBUG [M:0;jenkins-hbase4:41971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26e4b48d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:47,395 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:47,397 INFO [M:0;jenkins-hbase4:41971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2494f2d0{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:47,398 INFO [M:0;jenkins-hbase4:41971] server.AbstractConnector(383): Stopped ServerConnector@4811724e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:47,398 INFO [M:0;jenkins-hbase4:41971] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:47,398 INFO [M:0;jenkins-hbase4:41971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2e144596{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:47,399 INFO [M:0;jenkins-hbase4:41971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@247d4b40{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:47,399 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, 
path=/hbase/master 2023-07-16 14:15:47,399 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:47,399 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41971,1689516918385 2023-07-16 14:15:47,399 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41971,1689516918385; all regions closed. 2023-07-16 14:15:47,399 DEBUG [M:0;jenkins-hbase4:41971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:47,399 INFO [M:0;jenkins-hbase4:41971] master.HMaster(1491): Stopping master jetty server 2023-07-16 14:15:47,399 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:47,400 INFO [M:0;jenkins-hbase4:41971] server.AbstractConnector(383): Stopped ServerConnector@54097cdd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:47,400 DEBUG [M:0;jenkins-hbase4:41971] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 14:15:47,400 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 14:15:47,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516922526] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516922526,5,FailOnTimeoutGroup] 2023-07-16 14:15:47,400 DEBUG [M:0;jenkins-hbase4:41971] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 14:15:47,400 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516922529] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516922529,5,FailOnTimeoutGroup] 2023-07-16 14:15:47,401 INFO [M:0;jenkins-hbase4:41971] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 14:15:47,401 INFO [M:0;jenkins-hbase4:41971] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 14:15:47,401 INFO [M:0;jenkins-hbase4:41971] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 14:15:47,401 DEBUG [M:0;jenkins-hbase4:41971] master.HMaster(1512): Stopping service threads 2023-07-16 14:15:47,401 INFO [M:0;jenkins-hbase4:41971] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 14:15:47,401 ERROR [M:0;jenkins-hbase4:41971] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-16 14:15:47,402 INFO [M:0;jenkins-hbase4:41971] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 14:15:47,402 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. 
terminating. 2023-07-16 14:15:47,402 DEBUG [M:0;jenkins-hbase4:41971] zookeeper.ZKUtil(398): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 14:15:47,402 WARN [M:0;jenkins-hbase4:41971] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 14:15:47,402 INFO [M:0;jenkins-hbase4:41971] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 14:15:47,402 INFO [M:0;jenkins-hbase4:41971] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 14:15:47,403 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:47,403 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:47,403 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:47,403 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:47,403 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:47,403 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.75 KB heapSize=632.89 KB 2023-07-16 14:15:47,416 INFO [M:0;jenkins-hbase4:41971] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.75 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/160b4187e5da413a8eed75759109459b 2023-07-16 14:15:47,423 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/160b4187e5da413a8eed75759109459b as hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/160b4187e5da413a8eed75759109459b 2023-07-16 14:15:47,428 INFO [M:0;jenkins-hbase4:41971] regionserver.HStore(1080): Added hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/160b4187e5da413a8eed75759109459b, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-16 14:15:47,429 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegion(2948): Finished flush of dataSize ~528.75 KB/541437, heapSize ~632.88 KB/648064, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 26ms, sequenceid=1176, compaction requested=false 2023-07-16 14:15:47,431 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 14:15:47,431 DEBUG [M:0;jenkins-hbase4:41971] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:47,436 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:47,436 INFO [M:0;jenkins-hbase4:41971] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 14:15:47,436 INFO [M:0;jenkins-hbase4:41971] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41971 2023-07-16 14:15:47,438 DEBUG [M:0;jenkins-hbase4:41971] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,41971,1689516918385 already deleted, retry=false 2023-07-16 14:15:47,639 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,639 INFO [M:0;jenkins-hbase4:41971] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41971,1689516918385; zookeeper connection closed. 2023-07-16 14:15:47,639 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): master:41971-0x1016e7cc5860000, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,739 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,739 INFO [RS:3;jenkins-hbase4:44287] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44287,1689516924704; zookeeper connection closed. 2023-07-16 14:15:47,739 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:44287-0x1016e7cc586000b, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,739 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c04a4c2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c04a4c2 2023-07-16 14:15:47,839 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,839 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:43741-0x1016e7cc5860001, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,839 INFO [RS:0;jenkins-hbase4:43741] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43741,1689516920562; zookeeper connection closed. 
2023-07-16 14:15:47,839 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@595a34a1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@595a34a1 2023-07-16 14:15:47,939 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,939 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:34921-0x1016e7cc5860002, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:47,939 INFO [RS:1;jenkins-hbase4:34921] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34921,1689516920700; zookeeper connection closed. 2023-07-16 14:15:47,940 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5cc1b7a9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5cc1b7a9 2023-07-16 14:15:48,040 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:48,040 INFO [RS:2;jenkins-hbase4:41933] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41933,1689516920766; zookeeper connection closed. 2023-07-16 14:15:48,040 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): regionserver:41933-0x1016e7cc5860003, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:48,040 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5e76f472] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5e76f472 2023-07-16 14:15:48,040 INFO [Listener at localhost/36419] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 14:15:48,041 WARN [Listener at localhost/36419] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:48,045 INFO [Listener at localhost/36419] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:48,148 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:48,148 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-90143098-172.31.14.131-1689516914396 (Datanode Uuid 38781a8c-81fb-4e0c-8d51-66ba030550e2) service to localhost/127.0.0.1:42609 2023-07-16 14:15:48,150 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data5/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:48,150 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data6/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:48,152 WARN [Listener at localhost/36419] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:48,155 INFO [Listener at localhost/36419] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:48,260 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:48,260 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-90143098-172.31.14.131-1689516914396 (Datanode Uuid bad6bfd1-28e8-435d-86b8-123c65c90635) service to localhost/127.0.0.1:42609 2023-07-16 14:15:48,260 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data3/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:48,261 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data4/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:48,262 WARN [Listener at localhost/36419] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:48,264 INFO [Listener at localhost/36419] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:48,368 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:48,368 WARN [BP-90143098-172.31.14.131-1689516914396 heartbeating to localhost/127.0.0.1:42609] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-90143098-172.31.14.131-1689516914396 (Datanode Uuid 809c40bf-7c98-4302-803d-1582ef65a464) service to localhost/127.0.0.1:42609 2023-07-16 14:15:48,368 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data1/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:48,369 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/cluster_3951fb1c-3077-9ccf-90be-3916c455ca75/dfs/data/data2/current/BP-90143098-172.31.14.131-1689516914396] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: 
sleep interrupted 2023-07-16 14:15:48,400 INFO [Listener at localhost/36419] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:48,524 INFO [Listener at localhost/36419] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.log.dir so I do NOT create it in target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/6b65b661-b79c-8a82-f793-2b74c3badac6/hadoop.tmp.dir so I do NOT create it in target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852, deleteOnExit=true 2023-07-16 14:15:48,576 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/test.cache.data in system properties and HBase conf 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir in system properties and HBase conf 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 14:15:48,577 DEBUG [Listener at localhost/36419] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-16 14:15:48,577 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 14:15:48,578 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:48,579 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:48,579 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 14:15:48,579 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/nfs.dump.dir in system properties and HBase conf 2023-07-16 14:15:48,579 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/java.io.tmpdir in system properties and HBase conf 2023-07-16 14:15:48,580 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:48,580 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 14:15:48,580 INFO [Listener at localhost/36419] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 14:15:48,585 WARN [Listener at localhost/36419] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:48,585 WARN [Listener at localhost/36419] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:48,603 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:48,603 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 14:15:48,603 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 14:15:48,620 DEBUG [Listener at localhost/36419-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016e7cc586000a, quorum=127.0.0.1:63627, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 14:15:48,620 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016e7cc586000a, quorum=127.0.0.1:63627, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 14:15:48,631 WARN [Listener at localhost/36419] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:48,634 INFO [Listener at localhost/36419] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:48,640 INFO [Listener at localhost/36419] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/java.io.tmpdir/Jetty_localhost_45425_hdfs____.6iflkw/webapp 2023-07-16 14:15:48,748 INFO [Listener at localhost/36419] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45425 2023-07-16 14:15:48,752 WARN [Listener at localhost/36419] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:48,753 WARN [Listener at localhost/36419] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:48,798 WARN [Listener at localhost/43571] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:48,813 WARN [Listener at localhost/43571] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:48,816 WARN [Listener at localhost/43571] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:48,817 INFO [Listener at localhost/43571] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:48,824 INFO [Listener at localhost/43571] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/java.io.tmpdir/Jetty_localhost_33965_datanode____.qo6pj5/webapp 2023-07-16 14:15:48,922 INFO [Listener at localhost/43571] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33965 2023-07-16 14:15:48,930 WARN [Listener at localhost/33839] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:48,946 WARN [Listener at localhost/33839] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:48,955 WARN [Listener at localhost/33839] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:48,956 INFO [Listener at localhost/33839] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:48,961 INFO [Listener at localhost/33839] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/java.io.tmpdir/Jetty_localhost_39915_datanode____zi5ly6/webapp 2023-07-16 14:15:49,061 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeed7e28bc1db9284: Processing first storage report for DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f from datanode 32f03ae5-da48-46c0-a229-c6545176d025 2023-07-16 14:15:49,062 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeed7e28bc1db9284: from storage DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f node DatanodeRegistration(127.0.0.1:42223, datanodeUuid=32f03ae5-da48-46c0-a229-c6545176d025, infoPort=35269, infoSecurePort=0, ipcPort=33839, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 
14:15:49,062 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xeed7e28bc1db9284: Processing first storage report for DS-d0784273-3c40-4b8a-b483-b1c421298106 from datanode 32f03ae5-da48-46c0-a229-c6545176d025 2023-07-16 14:15:49,062 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xeed7e28bc1db9284: from storage DS-d0784273-3c40-4b8a-b483-b1c421298106 node DatanodeRegistration(127.0.0.1:42223, datanodeUuid=32f03ae5-da48-46c0-a229-c6545176d025, infoPort=35269, infoSecurePort=0, ipcPort=33839, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:49,080 INFO [Listener at localhost/33839] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39915 2023-07-16 14:15:49,089 WARN [Listener at localhost/34607] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:49,128 WARN [Listener at localhost/34607] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:49,131 WARN [Listener at localhost/34607] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:49,132 INFO [Listener at localhost/34607] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:49,140 INFO [Listener at localhost/34607] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/java.io.tmpdir/Jetty_localhost_44425_datanode____.wx3vs2/webapp 2023-07-16 14:15:49,204 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd528e4be7363b3bf: Processing first storage report for DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f from datanode 024ebe1f-3d61-4cbe-b799-05f6b06393a7 2023-07-16 14:15:49,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd528e4be7363b3bf: from storage DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f node DatanodeRegistration(127.0.0.1:39935, datanodeUuid=024ebe1f-3d61-4cbe-b799-05f6b06393a7, infoPort=46521, infoSecurePort=0, ipcPort=34607, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-16 14:15:49,205 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd528e4be7363b3bf: Processing first storage report for DS-6e5b0a8b-8cc8-463c-bb63-88bbbe1def67 from datanode 024ebe1f-3d61-4cbe-b799-05f6b06393a7 2023-07-16 14:15:49,205 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd528e4be7363b3bf: from storage DS-6e5b0a8b-8cc8-463c-bb63-88bbbe1def67 node DatanodeRegistration(127.0.0.1:39935, datanodeUuid=024ebe1f-3d61-4cbe-b799-05f6b06393a7, infoPort=46521, infoSecurePort=0, ipcPort=34607, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:49,251 INFO [Listener at localhost/34607] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44425 2023-07-16 14:15:49,266 WARN [Listener at 
localhost/33357] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:49,389 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4c63717ad5206ca6: Processing first storage report for DS-8b434933-7f14-46fc-a420-7cb8c09171dc from datanode 96d4de68-3ce3-4be5-9a00-28d08ffc636d 2023-07-16 14:15:49,390 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4c63717ad5206ca6: from storage DS-8b434933-7f14-46fc-a420-7cb8c09171dc node DatanodeRegistration(127.0.0.1:39355, datanodeUuid=96d4de68-3ce3-4be5-9a00-28d08ffc636d, infoPort=41717, infoSecurePort=0, ipcPort=33357, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:49,390 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x4c63717ad5206ca6: Processing first storage report for DS-b6450539-60ad-41d7-b607-e548b8ec4c92 from datanode 96d4de68-3ce3-4be5-9a00-28d08ffc636d 2023-07-16 14:15:49,390 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x4c63717ad5206ca6: from storage DS-b6450539-60ad-41d7-b607-e548b8ec4c92 node DatanodeRegistration(127.0.0.1:39355, datanodeUuid=96d4de68-3ce3-4be5-9a00-28d08ffc636d, infoPort=41717, infoSecurePort=0, ipcPort=33357, storageInfo=lv=-57;cid=testClusterID;nsid=1083877790;c=1689516948589), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:49,483 DEBUG [Listener at localhost/33357] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267 2023-07-16 14:15:49,486 INFO [Listener at localhost/33357] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/zookeeper_0, clientPort=55919, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 14:15:49,490 INFO [Listener at localhost/33357] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=55919 2023-07-16 14:15:49,490 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,491 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,512 INFO [Listener at localhost/33357] util.FSUtils(471): Created version file at 
hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593 with version=8 2023-07-16 14:15:49,512 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/hbase-staging 2023-07-16 14:15:49,513 DEBUG [Listener at localhost/33357] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 14:15:49,513 DEBUG [Listener at localhost/33357] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 14:15:49,513 DEBUG [Listener at localhost/33357] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 14:15:49,513 DEBUG [Listener at localhost/33357] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 14:15:49,514 INFO [Listener at localhost/33357] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:49,515 INFO [Listener at localhost/33357] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:49,516 INFO [Listener at localhost/33357] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40717 2023-07-16 14:15:49,516 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,517 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,519 INFO [Listener at localhost/33357] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40717 connecting to ZooKeeper ensemble=127.0.0.1:55919 2023-07-16 14:15:49,528 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:407170x0, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:49,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40717-0x1016e7d42f00000 connected 2023-07-16 14:15:49,545 DEBUG [Listener at 
localhost/33357] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:49,545 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:49,546 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:49,547 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40717 2023-07-16 14:15:49,547 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40717 2023-07-16 14:15:49,547 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40717 2023-07-16 14:15:49,548 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40717 2023-07-16 14:15:49,548 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40717 2023-07-16 14:15:49,551 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:49,551 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:49,551 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:49,552 INFO [Listener at localhost/33357] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 14:15:49,552 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:49,552 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:49,552 INFO [Listener at localhost/33357] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:49,553 INFO [Listener at localhost/33357] http.HttpServer(1146): Jetty bound to port 37767 2023-07-16 14:15:49,553 INFO [Listener at localhost/33357] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:49,555 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,556 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33eaf445{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:49,556 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,557 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eca1326{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:49,566 INFO [Listener at localhost/33357] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:49,567 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:49,568 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:49,568 INFO [Listener at localhost/33357] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 14:15:49,570 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,571 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3b9bbe66{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:49,573 INFO [Listener at localhost/33357] server.AbstractConnector(333): Started ServerConnector@38d09a26{HTTP/1.1, (http/1.1)}{0.0.0.0:37767} 2023-07-16 14:15:49,573 INFO [Listener at localhost/33357] server.Server(415): Started @37162ms 2023-07-16 14:15:49,573 INFO [Listener at localhost/33357] master.HMaster(444): hbase.rootdir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593, hbase.cluster.distributed=false 2023-07-16 14:15:49,591 INFO [Listener at localhost/33357] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:49,591 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,592 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,592 INFO [Listener at localhost/33357] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
14:15:49,592 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,592 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:49,592 INFO [Listener at localhost/33357] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:49,595 INFO [Listener at localhost/33357] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39377 2023-07-16 14:15:49,595 INFO [Listener at localhost/33357] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:49,601 DEBUG [Listener at localhost/33357] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:49,602 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,604 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,605 INFO [Listener at localhost/33357] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39377 connecting to ZooKeeper ensemble=127.0.0.1:55919 2023-07-16 14:15:49,614 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:393770x0, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:49,616 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39377-0x1016e7d42f00001 connected 2023-07-16 14:15:49,617 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:49,618 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:49,618 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:49,631 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39377 2023-07-16 14:15:49,634 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39377 2023-07-16 14:15:49,637 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39377 2023-07-16 14:15:49,638 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39377 2023-07-16 14:15:49,638 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39377 2023-07-16 14:15:49,641 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:49,641 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:49,641 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:49,642 INFO [Listener at localhost/33357] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:49,642 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:49,643 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:49,643 INFO [Listener at localhost/33357] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:49,644 INFO [Listener at localhost/33357] http.HttpServer(1146): Jetty bound to port 34313 2023-07-16 14:15:49,644 INFO [Listener at localhost/33357] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:49,657 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,657 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6495e970{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:49,658 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,658 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e0a0d01{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:49,666 INFO [Listener at localhost/33357] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:49,666 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:49,666 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:49,667 INFO [Listener at localhost/33357] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:49,668 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,668 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6e29eae4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:49,670 INFO [Listener at localhost/33357] server.AbstractConnector(333): Started ServerConnector@673e454{HTTP/1.1, (http/1.1)}{0.0.0.0:34313} 2023-07-16 14:15:49,670 INFO [Listener at localhost/33357] server.Server(415): Started @37259ms 2023-07-16 14:15:49,681 INFO [Listener at localhost/33357] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:49,681 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,682 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,682 INFO [Listener at localhost/33357] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:49,682 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,682 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:49,682 INFO [Listener at localhost/33357] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:49,683 INFO [Listener at localhost/33357] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45275 2023-07-16 14:15:49,683 INFO [Listener at localhost/33357] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:49,683 DEBUG [Listener at localhost/33357] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:49,684 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,685 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,687 INFO [Listener at localhost/33357] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45275 connecting to ZooKeeper ensemble=127.0.0.1:55919 2023-07-16 14:15:49,694 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:452750x0, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:49,695 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:452750x0, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:49,696 
DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:452750x0, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:49,697 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:452750x0, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:49,698 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45275-0x1016e7d42f00002 connected 2023-07-16 14:15:49,699 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45275 2023-07-16 14:15:49,699 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45275 2023-07-16 14:15:49,699 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45275 2023-07-16 14:15:49,700 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45275 2023-07-16 14:15:49,700 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45275 2023-07-16 14:15:49,702 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:49,703 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:49,703 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:49,704 INFO [Listener at localhost/33357] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:49,704 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:49,704 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:49,704 INFO [Listener at localhost/33357] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:49,704 INFO [Listener at localhost/33357] http.HttpServer(1146): Jetty bound to port 35259 2023-07-16 14:15:49,705 INFO [Listener at localhost/33357] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:49,712 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,712 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@30ac444b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:49,713 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,713 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6de35580{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:49,719 INFO [Listener at localhost/33357] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:49,720 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:49,720 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:49,720 INFO [Listener at localhost/33357] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 14:15:49,721 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,721 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5a13fd87{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:49,723 INFO [Listener at localhost/33357] server.AbstractConnector(333): Started ServerConnector@6cb6fe1f{HTTP/1.1, (http/1.1)}{0.0.0.0:35259} 2023-07-16 14:15:49,723 INFO [Listener at localhost/33357] server.Server(415): Started @37313ms 2023-07-16 14:15:49,734 INFO [Listener at localhost/33357] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:49,735 INFO [Listener at localhost/33357] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:49,736 INFO [Listener at localhost/33357] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41339 2023-07-16 14:15:49,736 INFO [Listener at localhost/33357] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:49,738 DEBUG [Listener at localhost/33357] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:49,738 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,739 INFO [Listener at localhost/33357] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,740 INFO [Listener at localhost/33357] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41339 connecting to ZooKeeper ensemble=127.0.0.1:55919 2023-07-16 14:15:49,746 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:413390x0, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:49,747 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:413390x0, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:49,748 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41339-0x1016e7d42f00003 connected 2023-07-16 14:15:49,748 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:49,748 DEBUG [Listener at localhost/33357] zookeeper.ZKUtil(164): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:49,749 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41339 2023-07-16 14:15:49,749 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41339 2023-07-16 14:15:49,751 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41339 2023-07-16 14:15:49,753 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41339 2023-07-16 14:15:49,753 DEBUG [Listener at localhost/33357] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41339 2023-07-16 14:15:49,755 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:49,755 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:49,755 INFO [Listener at localhost/33357] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:49,755 INFO [Listener at localhost/33357] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:49,755 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:49,756 INFO [Listener at localhost/33357] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:49,756 INFO [Listener at localhost/33357] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:49,756 INFO [Listener at localhost/33357] http.HttpServer(1146): Jetty bound to port 42697 2023-07-16 14:15:49,756 INFO [Listener at localhost/33357] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:49,759 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,759 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6eb4e686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:49,759 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,759 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4b95847b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:49,764 INFO [Listener at localhost/33357] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:49,765 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:49,765 INFO [Listener at localhost/33357] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:49,765 INFO [Listener at localhost/33357] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:49,766 INFO [Listener at localhost/33357] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:49,767 INFO [Listener at localhost/33357] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@61814198{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:49,769 INFO [Listener at localhost/33357] server.AbstractConnector(333): Started ServerConnector@50406226{HTTP/1.1, (http/1.1)}{0.0.0.0:42697} 2023-07-16 14:15:49,769 INFO [Listener at localhost/33357] server.Server(415): Started @37359ms 2023-07-16 14:15:49,773 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:49,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@7af59d75{HTTP/1.1, (http/1.1)}{0.0.0.0:36901} 2023-07-16 14:15:49,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37373ms 2023-07-16 14:15:49,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,785 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:49,785 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,787 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:49,787 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:49,787 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:49,787 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:49,788 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,789 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:49,790 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:49,790 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40717,1689516949514 from backup master directory 2023-07-16 14:15:49,791 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,791 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:49,791 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/hbase.id with ID: bb7f0360-d2ff-4b4b-950a-615b64476203 2023-07-16 14:15:49,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:49,837 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,855 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4ab738be to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:49,859 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6474288, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:49,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:49,860 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 14:15:49,861 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:49,862 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => 
''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store-tmp 2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:49,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:49,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
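For reference, the 'proc' column family attributes printed above for the master's local store region map directly onto the HBase 2.x descriptor-builder API. The sketch below is illustrative only: it uses a hypothetical user table named "demo" rather than the internal master:store table, which HBase creates for itself.

```java
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeepDeletedCells;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  public static void main(String[] args) {
    // Column family mirroring the attributes logged for 'proc' above:
    // VERSIONS=1, BLOOMFILTER=ROW, TTL=FOREVER, BLOCKSIZE=65536, etc.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)
        .setBloomFilterType(BloomType.ROW)
        .setInMemory(false)
        .setKeepDeletedCells(KeepDeletedCells.FALSE)
        .setDataBlockEncoding(DataBlockEncoding.NONE)
        .setCompressionType(Compression.Algorithm.NONE)
        .setTimeToLive(HConstants.FOREVER)
        .setMinVersions(0)
        .setBlockCacheEnabled(true)
        .setBlocksize(65536)
        .setScope(0)
        .build();
    // "demo" is a hypothetical table name used only for this example.
    TableDescriptor table = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo"))
        .setColumnFamily(proc)
        .build();
    System.out.println(table);
  }
}
```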
2023-07-16 14:15:49,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:49,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/WALs/jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40717%2C1689516949514, suffix=, logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/WALs/jenkins-hbase4.apache.org,40717,1689516949514, archiveDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/oldWALs, maxLogs=10 2023-07-16 14:15:49,902 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK] 2023-07-16 14:15:49,903 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK] 2023-07-16 14:15:49,903 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK] 2023-07-16 14:15:49,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/WALs/jenkins-hbase4.apache.org,40717,1689516949514/jenkins-hbase4.apache.org%2C40717%2C1689516949514.1689516949880 2023-07-16 14:15:49,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK], DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK], DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK]] 2023-07-16 14:15:49,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:49,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:49,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,916 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,917 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 14:15:49,918 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 14:15:49,918 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:49,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,919 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:49,924 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:49,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11486288800, jitterRate=0.06974400579929352}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:49,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:49,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 14:15:49,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 14:15:49,926 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 14:15:49,926 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 14:15:49,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 14:15:49,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 14:15:49,927 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 14:15:49,928 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 14:15:49,929 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 14:15:49,930 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 14:15:49,930 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 14:15:49,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 14:15:49,933 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,933 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 14:15:49,934 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 14:15:49,935 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 14:15:49,936 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:49,936 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:49,936 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 14:15:49,936 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:49,936 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40717,1689516949514, sessionid=0x1016e7d42f00000, setting cluster-up flag (Was=false) 2023-07-16 14:15:49,943 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,947 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 14:15:49,948 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,951 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:49,956 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 14:15:49,957 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:49,958 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.hbase-snapshot/.tmp 2023-07-16 14:15:49,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 14:15:49,961 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 14:15:49,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 14:15:49,967 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:49,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
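The RSGroupAdminEndpoint coprocessor and the "Refreshing in Offline mode" entry above appear because this test enables the rsgroup feature. A minimal sketch of the configuration that loads it on branch-2.4, following the hbase-rsgroup documentation (values shown are illustrative, not taken from this run's config dump):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RSGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup admin endpoint on the master.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // Route balancing decisions through the group-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.master.loadbalancer.class"));
  }
}
```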
2023-07-16 14:15:49,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-16 14:15:49,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:49,979 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(951): ClusterId : bb7f0360-d2ff-4b4b-950a-615b64476203 2023-07-16 14:15:49,979 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(951): ClusterId : bb7f0360-d2ff-4b4b-950a-615b64476203 2023-07-16 14:15:49,981 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:49,981 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:49,982 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(951): ClusterId : bb7f0360-d2ff-4b4b-950a-615b64476203 2023-07-16 14:15:49,983 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:49,987 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:49,987 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:49,987 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:49,987 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:49,987 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:49,987 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:49,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:49,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
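The StochasticLoadBalancer parameters echoed above (maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000) come from its configuration keys. A hedged sketch of overriding them, assuming the standard hbase.master.balancer.stochastic.* names:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // The values reported in the log lines above, written as explicit overrides.
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    System.out.println(conf.get("hbase.master.balancer.stochastic.maxSteps"));
  }
}
```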
2023-07-16 14:15:49,989 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:49,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:49,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 14:15:49,992 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ReadOnlyZKClient(139): Connect 0x0a0caadb to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:49,992 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:49,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:49,997 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ReadOnlyZKClient(139): Connect 0x1f02cc97 to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:49,997 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:50,004 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ReadOnlyZKClient(139): Connect 0x29b0d739 to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 
14:15:50,010 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689516980003 2023-07-16 14:15:50,011 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:50,011 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 14:15:50,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 14:15:50,015 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:50,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
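The LogsCleaner chore and the TimeToLive*/ReplicationLogCleaner delegates initialized above are pluggable. A sketch of the relevant keys, assuming the usual hbase.master.logcleaner.* properties (the plugin list and TTL below restate what this run logged, not a recommendation):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CleanerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Comma-separated cleaner delegates run by the LogsCleaner chore.
    conf.set("hbase.master.logcleaner.plugins",
        "org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,"
            + "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner");
    // How long WALs stay in oldWALs before TimeToLiveLogCleaner deletes them (ms).
    conf.setLong("hbase.master.logcleaner.ttl", 600_000L);
    System.out.println(conf.get("hbase.master.logcleaner.plugins"));
  }
}
```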
2023-07-16 14:15:50,024 DEBUG [RS:0;jenkins-hbase4:39377] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4cb45ea, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:50,024 DEBUG [RS:0;jenkins-hbase4:39377] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@f4c1ab, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:50,027 DEBUG [RS:1;jenkins-hbase4:45275] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2348bc49, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:50,027 DEBUG [RS:1;jenkins-hbase4:45275] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@664a3644, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:50,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 14:15:50,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 14:15:50,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 14:15:50,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 14:15:50,030 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 14:15:50,032 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516950031,5,FailOnTimeoutGroup] 2023-07-16 14:15:50,032 DEBUG [RS:2;jenkins-hbase4:41339] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11c53107, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:50,032 DEBUG [RS:2;jenkins-hbase4:41339] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15faa7e7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:50,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516950032,5,FailOnTimeoutGroup] 2023-07-16 14:15:50,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore 
name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 14:15:50,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,035 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,035 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39377 2023-07-16 14:15:50,035 INFO [RS:0;jenkins-hbase4:39377] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:50,035 INFO [RS:0;jenkins-hbase4:39377] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:50,035 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:50,036 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40717,1689516949514 with isa=jenkins-hbase4.apache.org/172.31.14.131:39377, startcode=1689516949591 2023-07-16 14:15:50,036 DEBUG [RS:0;jenkins-hbase4:39377] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:50,038 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:45275 2023-07-16 14:15:50,038 INFO [RS:1;jenkins-hbase4:45275] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:50,038 INFO [RS:1;jenkins-hbase4:45275] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:50,038 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:50,039 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40717,1689516949514 with isa=jenkins-hbase4.apache.org/172.31.14.131:45275, startcode=1689516949681 2023-07-16 14:15:50,039 DEBUG [RS:1;jenkins-hbase4:45275] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:50,039 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32829, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:50,041 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40717] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,042 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
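Each reportForDuty / "Registering regionserver" pair above corresponds to a server the master then exposes through the client Admin API. A sketch of listing them from a client, assuming Admin.getRegionServers() as in the 2.x client API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListServersSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // In this run the output would contain entries such as
      // jenkins-hbase4.apache.org,39377,1689516949591.
      for (ServerName sn : admin.getRegionServers()) {
        System.out.println(sn.getServerName());
      }
    }
  }
}
```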
2023-07-16 14:15:50,042 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593 2023-07-16 14:15:50,042 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43571 2023-07-16 14:15:50,042 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37767 2023-07-16 14:15:50,044 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:50,044 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ZKUtil(162): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,044 WARN [RS:0;jenkins-hbase4:39377] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:50,044 INFO [RS:0;jenkins-hbase4:39377] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:50,044 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,046 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41339 2023-07-16 14:15:50,046 INFO [RS:2;jenkins-hbase4:41339] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:50,046 INFO [RS:2;jenkins-hbase4:41339] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:50,046 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1022): About to register with Master. 
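Each region server above instantiates AsyncFSWALProvider; which provider WALFactory picks is controlled by the hbase.wal.provider key. A small sketch, with the provider names given as assumptions based on the standard values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider (the provider logged above);
    // "filesystem" selects the classic FSHLog-based provider.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println(conf.get("hbase.wal.provider"));
  }
}
```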
2023-07-16 14:15:50,047 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40717,1689516949514 with isa=jenkins-hbase4.apache.org/172.31.14.131:41339, startcode=1689516949734 2023-07-16 14:15:50,047 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41551, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:50,047 DEBUG [RS:2;jenkins-hbase4:41339] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:50,047 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40717] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,048 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593 2023-07-16 14:15:50,048 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43571 2023-07-16 14:15:50,048 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37767 2023-07-16 14:15:50,049 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34409, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:50,049 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40717] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,049 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593 2023-07-16 14:15:50,049 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43571 2023-07-16 14:15:50,049 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37767 2023-07-16 14:15:50,054 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 14:15:50,054 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:50,054 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 14:15:50,062 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ZKUtil(162): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,062 WARN [RS:1;jenkins-hbase4:45275] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
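The ServerEventsListenerThread entries above show the default rsgroup absorbing the three freshly registered servers ("Updated with servers: 3"); TestRSGroupsAdmin1 then drives group membership through the rsgroup admin API. A rough sketch of that client usage, assuming the branch-2.4 RSGroupAdminClient and a hypothetical host name and group name:

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Create a new group, then move one server into it.
      // "appservers" and "rs-host.example.org" are placeholders for this sketch.
      rsGroupAdmin.addRSGroup("appservers");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("rs-host.example.org", 16020)),
          "appservers");
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appservers");
      System.out.println(info.getServers());
    }
  }
}
```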
2023-07-16 14:15:50,062 INFO [RS:1;jenkins-hbase4:45275] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:50,062 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,066 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ZKUtil(162): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,066 WARN [RS:2;jenkins-hbase4:41339] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:50,066 INFO [RS:2;jenkins-hbase4:41339] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:50,066 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39377,1689516949591] 2023-07-16 14:15:50,066 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41339,1689516949734] 2023-07-16 14:15:50,066 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45275,1689516949681] 2023-07-16 14:15:50,066 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,075 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ZKUtil(162): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,077 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ZKUtil(162): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,079 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ZKUtil(162): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,080 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ZKUtil(162): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,081 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ZKUtil(162): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,081 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ZKUtil(162): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,081 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ZKUtil(162): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,082 DEBUG [RS:2;jenkins-hbase4:41339] 
zookeeper.ZKUtil(162): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,082 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:50,082 INFO [RS:0;jenkins-hbase4:39377] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:50,082 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ZKUtil(162): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,082 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:50,083 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:50,083 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593 2023-07-16 14:15:50,084 INFO [RS:0;jenkins-hbase4:39377] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:50,085 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:50,085 INFO [RS:1;jenkins-hbase4:45275] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:50,086 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:50,087 INFO [RS:0;jenkins-hbase4:39377] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:50,087 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
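The globalMemStoreLimit and compaction-throughput bounds above reflect heap-fraction and throttling settings. A sketch of the corresponding keys, assuming the standard names; the values simply restate what this run reported (782.4 MB upper / 743.3 MB lower memstore limits, 100/50 MB/s compaction bounds):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemstoreThroughputSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of heap usable by all memstores, and the lower-water mark
    // expressed as a fraction of that limit.
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Throttle bounds (bytes/sec) used by the
    // PressureAwareCompactionThroughputController logged above.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    System.out.println(conf.get("hbase.regionserver.global.memstore.size"));
  }
}
```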
2023-07-16 14:15:50,088 INFO [RS:2;jenkins-hbase4:41339] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:50,089 INFO [RS:1;jenkins-hbase4:45275] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:50,091 INFO [RS:1;jenkins-hbase4:45275] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:50,091 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,094 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:50,094 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:50,098 INFO [RS:2;jenkins-hbase4:41339] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:50,098 INFO [RS:2;jenkins-hbase4:41339] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:50,098 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,098 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:50,100 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
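CompactionChecker, MemstoreFlusherChore, nonceCleaner, CompactedHFilesCleaner and the other "ScheduledChore ... is enabled" entries above are all ScheduledChore instances driven by a shared ChoreService. A minimal sketch of that pattern, with a hypothetical chore name and a one-second period like the CompactionChecker above:

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // A minimal Stoppable: each chore needs an owner that can signal shutdown.
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped = false;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("demo");
    // Period is in milliseconds.
    ScheduledChore chore = new ScheduledChore("demoChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("chore tick");
      }
    };
    service.scheduleChore(chore);
    Thread.sleep(3_500);
    stopper.stop("done");
    service.shutdown();
  }
}
```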
2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:50,100 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,101 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,101 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,101 DEBUG [RS:1;jenkins-hbase4:45275] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,102 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,103 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,104 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,104 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,104 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,104 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:50,105 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,105 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:50,106 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,106 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,106 DEBUG [RS:2;jenkins-hbase4:41339] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,106 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,106 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,106 DEBUG [RS:0;jenkins-hbase4:39377] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:50,117 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,117 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,117 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,117 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,117 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 14:15:50,119 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,119 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,119 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,119 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,126 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,128 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:50,135 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/info 2023-07-16 14:15:50,135 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:50,136 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,137 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, 
cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:50,139 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:50,139 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:50,140 INFO [RS:1;jenkins-hbase4:45275] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:50,140 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45275,1689516949681-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,140 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:50,142 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/table 2023-07-16 14:15:50,142 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:50,143 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,147 INFO [RS:2;jenkins-hbase4:41339] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:50,147 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore 
name=jenkins-hbase4.apache.org,41339,1689516949734-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,147 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740 2023-07-16 14:15:50,147 INFO [RS:0;jenkins-hbase4:39377] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:50,147 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39377,1689516949591-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,148 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740 2023-07-16 14:15:50,153 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 14:15:50,163 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:50,180 INFO [RS:1;jenkins-hbase4:45275] regionserver.Replication(203): jenkins-hbase4.apache.org,45275,1689516949681 started 2023-07-16 14:15:50,180 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45275,1689516949681, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45275, sessionid=0x1016e7d42f00002 2023-07-16 14:15:50,180 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:50,180 DEBUG [RS:1;jenkins-hbase4:45275] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,180 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45275,1689516949681' 2023-07-16 14:15:50,180 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:50,183 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:50,183 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:50,184 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11685986720, jitterRate=0.0883423238992691}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:50,184 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:50,184 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:50,184 INFO [RS:2;jenkins-hbase4:41339] regionserver.Replication(203): jenkins-hbase4.apache.org,41339,1689516949734 started 2023-07-16 14:15:50,184 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1637): Serving as 
jenkins-hbase4.apache.org,41339,1689516949734, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41339, sessionid=0x1016e7d42f00003 2023-07-16 14:15:50,184 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:50,184 INFO [RS:0;jenkins-hbase4:39377] regionserver.Replication(203): jenkins-hbase4.apache.org,39377,1689516949591 started 2023-07-16 14:15:50,184 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:50,190 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39377,1689516949591, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39377, sessionid=0x1016e7d42f00001 2023-07-16 14:15:50,190 DEBUG [RS:1;jenkins-hbase4:45275] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,187 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:50,191 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:50,191 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45275,1689516949681' 2023-07-16 14:15:50,191 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:50,190 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:50,191 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:50,192 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:50,191 DEBUG [RS:0;jenkins-hbase4:39377] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,191 DEBUG [RS:2;jenkins-hbase4:41339] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,192 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39377,1689516949591' 2023-07-16 14:15:50,192 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:50,192 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:50,192 DEBUG [RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41339,1689516949734' 2023-07-16 14:15:50,192 DEBUG [RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:50,192 DEBUG [RS:1;jenkins-hbase4:45275] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:50,193 DEBUG 
[RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:50,193 DEBUG [RS:1;jenkins-hbase4:45275] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:50,193 INFO [RS:1;jenkins-hbase4:45275] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:50,193 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,193 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39377,1689516949591' 2023-07-16 14:15:50,193 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:50,193 DEBUG [RS:2;jenkins-hbase4:41339] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,194 DEBUG [RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41339,1689516949734' 2023-07-16 14:15:50,194 DEBUG [RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:50,194 DEBUG [RS:2;jenkins-hbase4:41339] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:50,194 DEBUG [RS:0;jenkins-hbase4:39377] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:50,194 DEBUG [RS:2;jenkins-hbase4:41339] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:50,194 INFO [RS:2;jenkins-hbase4:41339] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 14:15:50,195 DEBUG [RS:0;jenkins-hbase4:39377] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:50,195 INFO [RS:0;jenkins-hbase4:39377] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 14:15:50,196 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,196 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,196 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:50,196 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ZKUtil(398): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 14:15:50,196 INFO [RS:0;jenkins-hbase4:39377] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 14:15:50,196 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ZKUtil(398): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 14:15:50,197 INFO [RS:1;jenkins-hbase4:45275] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 14:15:50,197 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,197 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,197 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,197 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ZKUtil(398): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 14:15:50,197 INFO [RS:2;jenkins-hbase4:41339] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 14:15:50,197 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,198 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,197 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:50,204 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:50,204 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:50,206 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:50,206 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 14:15:50,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 14:15:50,208 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 14:15:50,210 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 14:15:50,301 INFO [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39377%2C1689516949591, suffix=, logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,39377,1689516949591, archiveDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs, maxLogs=32 2023-07-16 14:15:50,301 INFO [RS:1;jenkins-hbase4:45275] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45275%2C1689516949681, suffix=, logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,45275,1689516949681, archiveDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs, maxLogs=32 2023-07-16 14:15:50,301 INFO [RS:2;jenkins-hbase4:41339] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41339%2C1689516949734, suffix=, logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,41339,1689516949734, archiveDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs, maxLogs=32 2023-07-16 14:15:50,329 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK] 2023-07-16 14:15:50,331 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK] 2023-07-16 14:15:50,331 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK] 2023-07-16 14:15:50,331 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK] 2023-07-16 14:15:50,332 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK] 2023-07-16 14:15:50,332 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK] 2023-07-16 14:15:50,332 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK] 2023-07-16 14:15:50,332 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK] 2023-07-16 14:15:50,333 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK] 2023-07-16 14:15:50,338 INFO [RS:2;jenkins-hbase4:41339] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,41339,1689516949734/jenkins-hbase4.apache.org%2C41339%2C1689516949734.1689516950312 2023-07-16 14:15:50,338 INFO [RS:1;jenkins-hbase4:45275] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,45275,1689516949681/jenkins-hbase4.apache.org%2C45275%2C1689516949681.1689516950312 2023-07-16 14:15:50,339 INFO [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,39377,1689516949591/jenkins-hbase4.apache.org%2C39377%2C1689516949591.1689516950311 2023-07-16 14:15:50,342 DEBUG [RS:2;jenkins-hbase4:41339] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK], DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK]] 2023-07-16 14:15:50,342 DEBUG [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK], DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK]] 2023-07-16 14:15:50,342 DEBUG [RS:1;jenkins-hbase4:45275] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK], DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK]] 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:50,360 DEBUG [jenkins-hbase4:40717] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:50,361 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39377,1689516949591, state=OPENING 2023-07-16 14:15:50,363 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 14:15:50,364 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:50,364 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39377,1689516949591}] 2023-07-16 14:15:50,365 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:50,520 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:50,520 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:50,522 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35734, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:50,527 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 14:15:50,527 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:50,529 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39377%2C1689516949591.meta, suffix=.meta, logDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,39377,1689516949591, archiveDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs, maxLogs=32 2023-07-16 14:15:50,546 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 
127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK] 2023-07-16 14:15:50,546 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK] 2023-07-16 14:15:50,547 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK] 2023-07-16 14:15:50,550 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/WALs/jenkins-hbase4.apache.org,39377,1689516949591/jenkins-hbase4.apache.org%2C39377%2C1689516949591.meta.1689516950530.meta 2023-07-16 14:15:50,550 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39355,DS-8b434933-7f14-46fc-a420-7cb8c09171dc,DISK], DatanodeInfoWithStorage[127.0.0.1:42223,DS-cb7ab8ad-469b-4dc6-86e2-4ed42c38ea5f,DISK], DatanodeInfoWithStorage[127.0.0.1:39935,DS-68279f7a-c9df-48e9-97ce-03e8c6d9659f,DISK]] 2023-07-16 14:15:50,550 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:50,550 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:50,550 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 14:15:50,551 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 14:15:50,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 14:15:50,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 14:15:50,551 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 14:15:50,552 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:50,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/info 2023-07-16 14:15:50,553 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/info 2023-07-16 14:15:50,553 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:50,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:50,555 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:50,555 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:50,555 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:50,556 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,556 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:50,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/table 2023-07-16 14:15:50,557 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/table 2023-07-16 14:15:50,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:50,558 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,558 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740 2023-07-16 14:15:50,559 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740 2023-07-16 14:15:50,561 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 14:15:50,562 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:50,563 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10385294240, jitterRate=-0.032794103026390076}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:50,563 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:50,563 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689516950519 2023-07-16 14:15:50,568 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 14:15:50,569 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 14:15:50,569 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39377,1689516949591, state=OPEN 2023-07-16 14:15:50,571 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:50,571 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:50,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 14:15:50,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39377,1689516949591 in 207 msec 2023-07-16 14:15:50,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 14:15:50,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 367 msec 2023-07-16 14:15:50,576 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 607 msec 2023-07-16 14:15:50,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689516950576, completionTime=-1 2023-07-16 14:15:50,576 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 14:15:50,576 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-16 14:15:50,580 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:50,581 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35748, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:50,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 14:15:50,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689517010583 2023-07-16 14:15:50,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689517070583 2023-07-16 14:15:50,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-16 14:15:50,583 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:50,585 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 14:15:50,586 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689516949514-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689516949514-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689516949514-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40717, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-16 14:15:50,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:50,590 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:50,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 14:15:50,591 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:50,592 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 14:15:50,592 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:50,593 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:50,593 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,594 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0 empty. 2023-07-16 14:15:50,594 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,594 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 14:15:50,594 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,595 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3 empty. 
2023-07-16 14:15:50,595 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,595 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 14:15:50,612 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:50,614 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5a995ca3399f240e4fe310538087e5d0, NAME => 'hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp 2023-07-16 14:15:50,616 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:50,618 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => de8b969996e57525828002bb0b2d24b3, NAME => 'hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 5a995ca3399f240e4fe310538087e5d0, disabling compactions & flushes 2023-07-16 14:15:50,627 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 
after waiting 0 ms 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,627 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,627 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 5a995ca3399f240e4fe310538087e5d0: 2023-07-16 14:15:50,629 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:50,630 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516950630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516950630"}]},"ts":"1689516950630"} 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing de8b969996e57525828002bb0b2d24b3, disabling compactions & flushes 2023-07-16 14:15:50,635 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. after waiting 0 ms 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:50,635 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:50,635 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for de8b969996e57525828002bb0b2d24b3: 2023-07-16 14:15:50,637 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:50,638 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516950638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516950638"}]},"ts":"1689516950638"} 2023-07-16 14:15:50,639 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:50,639 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 14:15:50,640 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:50,640 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516950640"}]},"ts":"1689516950640"} 2023-07-16 14:15:50,640 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:50,640 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516950640"}]},"ts":"1689516950640"} 2023-07-16 14:15:50,641 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 14:15:50,642 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 14:15:50,646 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:50,646 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:50,646 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:50,646 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:50,646 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:50,646 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5a995ca3399f240e4fe310538087e5d0, ASSIGN}] 2023-07-16 14:15:50,646 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:50,647 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:50,647 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:50,647 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:50,647 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:50,647 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=de8b969996e57525828002bb0b2d24b3, ASSIGN}] 2023-07-16 14:15:50,648 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5a995ca3399f240e4fe310538087e5d0, ASSIGN 2023-07-16 14:15:50,649 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=5a995ca3399f240e4fe310538087e5d0, ASSIGN; state=OFFLINE, 
location=jenkins-hbase4.apache.org,45275,1689516949681; forceNewPlan=false, retain=false 2023-07-16 14:15:50,651 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=de8b969996e57525828002bb0b2d24b3, ASSIGN 2023-07-16 14:15:50,652 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=de8b969996e57525828002bb0b2d24b3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41339,1689516949734; forceNewPlan=false, retain=false 2023-07-16 14:15:50,652 INFO [jenkins-hbase4:40717] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-16 14:15:50,654 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5a995ca3399f240e4fe310538087e5d0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,654 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=de8b969996e57525828002bb0b2d24b3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,654 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516950654"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516950654"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516950654"}]},"ts":"1689516950654"} 2023-07-16 14:15:50,654 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516950654"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516950654"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516950654"}]},"ts":"1689516950654"} 2023-07-16 14:15:50,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 5a995ca3399f240e4fe310538087e5d0, server=jenkins-hbase4.apache.org,45275,1689516949681}] 2023-07-16 14:15:50,656 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure de8b969996e57525828002bb0b2d24b3, server=jenkins-hbase4.apache.org,41339,1689516949734}] 2023-07-16 14:15:50,809 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,810 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,810 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:50,810 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:50,812 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33092, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:50,812 INFO [RS-EventLoopGroup-11-2] 
ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38412, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:50,817 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,817 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a995ca3399f240e4fe310538087e5d0, NAME => 'hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. service=MultiRowMutationService 2023-07-16 14:15:50,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 
2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de8b969996e57525828002bb0b2d24b3, NAME => 'hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,819 INFO [StoreOpener-5a995ca3399f240e4fe310538087e5d0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,820 INFO [StoreOpener-de8b969996e57525828002bb0b2d24b3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,821 DEBUG [StoreOpener-5a995ca3399f240e4fe310538087e5d0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/m 2023-07-16 14:15:50,821 DEBUG [StoreOpener-5a995ca3399f240e4fe310538087e5d0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/m 2023-07-16 14:15:50,821 DEBUG [StoreOpener-de8b969996e57525828002bb0b2d24b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/info 2023-07-16 14:15:50,821 DEBUG [StoreOpener-de8b969996e57525828002bb0b2d24b3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/info 2023-07-16 14:15:50,821 INFO [StoreOpener-5a995ca3399f240e4fe310538087e5d0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a995ca3399f240e4fe310538087e5d0 columnFamilyName m 2023-07-16 14:15:50,821 INFO [StoreOpener-de8b969996e57525828002bb0b2d24b3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de8b969996e57525828002bb0b2d24b3 columnFamilyName info 2023-07-16 14:15:50,822 INFO [StoreOpener-5a995ca3399f240e4fe310538087e5d0-1] regionserver.HStore(310): Store=5a995ca3399f240e4fe310538087e5d0/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,822 INFO [StoreOpener-de8b969996e57525828002bb0b2d24b3-1] regionserver.HStore(310): Store=de8b969996e57525828002bb0b2d24b3/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:50,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:50,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:50,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:50,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1072): Opened 5a995ca3399f240e4fe310538087e5d0; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@42ff35ed, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:50,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:50,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5a995ca3399f240e4fe310538087e5d0: 2023-07-16 14:15:50,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de8b969996e57525828002bb0b2d24b3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9411980960, jitterRate=-0.12344096601009369}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:50,831 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de8b969996e57525828002bb0b2d24b3: 2023-07-16 14:15:50,832 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0., pid=8, masterSystemTime=1689516950809 2023-07-16 14:15:50,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3., pid=9, masterSystemTime=1689516950810 2023-07-16 14:15:50,837 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,837 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:50,838 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=5a995ca3399f240e4fe310538087e5d0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:50,838 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:50,838 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516950838"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516950838"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516950838"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516950838"}]},"ts":"1689516950838"} 2023-07-16 14:15:50,838 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 
2023-07-16 14:15:50,839 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=de8b969996e57525828002bb0b2d24b3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:50,839 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516950839"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516950839"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516950839"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516950839"}]},"ts":"1689516950839"} 2023-07-16 14:15:50,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 14:15:50,842 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 5a995ca3399f240e4fe310538087e5d0, server=jenkins-hbase4.apache.org,45275,1689516949681 in 185 msec 2023-07-16 14:15:50,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 14:15:50,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure de8b969996e57525828002bb0b2d24b3, server=jenkins-hbase4.apache.org,41339,1689516949734 in 185 msec 2023-07-16 14:15:50,845 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-16 14:15:50,845 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=5a995ca3399f240e4fe310538087e5d0, ASSIGN in 196 msec 2023-07-16 14:15:50,846 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:50,846 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516950846"}]},"ts":"1689516950846"} 2023-07-16 14:15:50,847 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-16 14:15:50,847 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=de8b969996e57525828002bb0b2d24b3, ASSIGN in 197 msec 2023-07-16 14:15:50,847 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 14:15:50,848 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:50,848 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516950848"}]},"ts":"1689516950848"} 2023-07-16 14:15:50,850 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:50,850 INFO [PEWorker-3] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 14:15:50,851 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 267 msec 2023-07-16 14:15:50,853 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:50,855 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 263 msec 2023-07-16 14:15:50,891 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:50,892 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 14:15:50,893 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:50,893 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:50,895 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33104, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:50,900 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:50,901 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:50,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 14:15:50,904 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 14:15:50,904 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-16 14:15:50,911 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:50,915 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:50,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-16 14:15:50,915 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:50,917 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:50,919 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40717,1689516949514] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 14:15:50,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 14:15:50,933 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:50,937 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 12 msec 2023-07-16 14:15:50,950 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 14:15:50,954 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 14:15:50,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.163sec 2023-07-16 14:15:50,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-16 14:15:50,954 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:50,955 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-16 14:15:50,955 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-16 14:15:50,957 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:50,958 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:50,959 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-16 14:15:50,960 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:50,961 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0 empty. 2023-07-16 14:15:50,961 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:50,961 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-16 14:15:50,964 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-16 14:15:50,964 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689516949514-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 14:15:50,967 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40717,1689516949514-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 14:15:50,968 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 14:15:50,976 DEBUG [Listener at localhost/33357] zookeeper.ReadOnlyZKClient(139): Connect 0x2fe4814a to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:50,978 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:50,984 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => a7cdf9517a891cf54a2d525e238d7da0, NAME => 'hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp 2023-07-16 14:15:50,986 DEBUG [Listener at localhost/33357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1cbd4662, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:50,990 DEBUG [hconnection-0x4411c3c2-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:50,992 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:50,993 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:50,994 INFO [Listener at localhost/33357] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:50,996 DEBUG [Listener at localhost/33357] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 14:15:50,998 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55864, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 14:15:51,002 DEBUG [Listener at localhost/33357-EventThread] 
zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 14:15:51,002 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing a7cdf9517a891cf54a2d525e238d7da0, disabling compactions & flushes 2023-07-16 14:15:51,002 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. after waiting 0 ms 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,002 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,002 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for a7cdf9517a891cf54a2d525e238d7da0: 2023-07-16 14:15:51,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 14:15:51,003 DEBUG [Listener at localhost/33357] zookeeper.ReadOnlyZKClient(139): Connect 0x52ea318a to 127.0.0.1:55919 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:51,005 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:51,006 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689516951006"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516951006"}]},"ts":"1689516951006"} 2023-07-16 14:15:51,009 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 14:15:51,010 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:51,010 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516951010"}]},"ts":"1689516951010"} 2023-07-16 14:15:51,011 DEBUG [Listener at localhost/33357] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18aff7c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:51,011 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-16 14:15:51,011 INFO [Listener at localhost/33357] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:55919 2023-07-16 14:15:51,015 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:51,015 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016e7d42f0000a connected 2023-07-16 14:15:51,016 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:51,016 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:51,016 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:51,016 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:51,016 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:51,016 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=a7cdf9517a891cf54a2d525e238d7da0, ASSIGN}] 2023-07-16 14:15:51,017 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=a7cdf9517a891cf54a2d525e238d7da0, ASSIGN 2023-07-16 14:15:51,019 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=a7cdf9517a891cf54a2d525e238d7da0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39377,1689516949591; forceNewPlan=false, retain=false 2023-07-16 14:15:51,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-16 14:15:51,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-16 14:15:51,027 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 14:15:51,033 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:51,036 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 15 msec 2023-07-16 14:15:51,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 14:15:51,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:51,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-16 14:15:51,138 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:51,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-16 14:15:51,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 14:15:51,143 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:51,144 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:51,145 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:51,147 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,148 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 empty. 
2023-07-16 14:15:51,148 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,148 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 14:15:51,161 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:51,162 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0e34b288123fe8d84b9446eb350ec703, NAME => 'np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp 2023-07-16 14:15:51,169 INFO [jenkins-hbase4:40717] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 14:15:51,171 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a7cdf9517a891cf54a2d525e238d7da0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:51,171 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689516951170"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516951170"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516951170"}]},"ts":"1689516951170"} 2023-07-16 14:15:51,172 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:51,172 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 0e34b288123fe8d84b9446eb350ec703, disabling compactions & flushes 2023-07-16 14:15:51,172 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:51,172 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:51,172 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. after waiting 0 ms 2023-07-16 14:15:51,172 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 
2023-07-16 14:15:51,172 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure a7cdf9517a891cf54a2d525e238d7da0, server=jenkins-hbase4.apache.org,39377,1689516949591}] 2023-07-16 14:15:51,172 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:51,173 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 0e34b288123fe8d84b9446eb350ec703: 2023-07-16 14:15:51,175 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:51,176 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516951176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516951176"}]},"ts":"1689516951176"} 2023-07-16 14:15:51,177 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:51,177 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:51,178 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516951178"}]},"ts":"1689516951178"} 2023-07-16 14:15:51,179 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-16 14:15:51,182 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:51,182 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:51,182 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:51,182 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:51,182 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:51,182 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, ASSIGN}] 2023-07-16 14:15:51,183 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, ASSIGN 2023-07-16 14:15:51,183 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45275,1689516949681; forceNewPlan=false, retain=false 2023-07-16 14:15:51,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 14:15:51,329 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,329 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a7cdf9517a891cf54a2d525e238d7da0, NAME => 'hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:51,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:51,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,330 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,333 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,334 INFO [jenkins-hbase4:40717] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 14:15:51,335 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0e34b288123fe8d84b9446eb350ec703, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:51,335 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516951335"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516951335"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516951335"}]},"ts":"1689516951335"} 2023-07-16 14:15:51,337 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 0e34b288123fe8d84b9446eb350ec703, server=jenkins-hbase4.apache.org,45275,1689516949681}] 2023-07-16 14:15:51,342 DEBUG [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/q 2023-07-16 14:15:51,342 DEBUG [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/q 2023-07-16 14:15:51,342 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a7cdf9517a891cf54a2d525e238d7da0 columnFamilyName q 2023-07-16 14:15:51,343 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] regionserver.HStore(310): Store=a7cdf9517a891cf54a2d525e238d7da0/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:51,343 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,345 DEBUG [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/u 2023-07-16 14:15:51,345 DEBUG [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/u 2023-07-16 14:15:51,346 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, 
maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a7cdf9517a891cf54a2d525e238d7da0 columnFamilyName u 2023-07-16 14:15:51,346 INFO [StoreOpener-a7cdf9517a891cf54a2d525e238d7da0-1] regionserver.HStore(310): Store=a7cdf9517a891cf54a2d525e238d7da0/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:51,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,347 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,350 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-16 14:15:51,351 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:51,354 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:51,355 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a7cdf9517a891cf54a2d525e238d7da0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9899747200, jitterRate=-0.07801419496536255}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-16 14:15:51,355 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a7cdf9517a891cf54a2d525e238d7da0: 2023-07-16 14:15:51,357 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0., pid=16, masterSystemTime=1689516951324 2023-07-16 14:15:51,359 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:51,359 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 
2023-07-16 14:15:51,359 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=a7cdf9517a891cf54a2d525e238d7da0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:51,359 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689516951359"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516951359"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516951359"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516951359"}]},"ts":"1689516951359"} 2023-07-16 14:15:51,362 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-16 14:15:51,363 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure a7cdf9517a891cf54a2d525e238d7da0, server=jenkins-hbase4.apache.org,39377,1689516949591 in 189 msec 2023-07-16 14:15:51,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 14:15:51,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=a7cdf9517a891cf54a2d525e238d7da0, ASSIGN in 347 msec 2023-07-16 14:15:51,365 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:51,365 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516951365"}]},"ts":"1689516951365"} 2023-07-16 14:15:51,366 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-16 14:15:51,369 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:51,371 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 415 msec 2023-07-16 14:15:51,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 14:15:51,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 
2023-07-16 14:15:51,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0e34b288123fe8d84b9446eb350ec703, NAME => 'np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:51,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:51,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,499 INFO [StoreOpener-0e34b288123fe8d84b9446eb350ec703-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,500 DEBUG [StoreOpener-0e34b288123fe8d84b9446eb350ec703-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/fam1 2023-07-16 14:15:51,500 DEBUG [StoreOpener-0e34b288123fe8d84b9446eb350ec703-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/fam1 2023-07-16 14:15:51,501 INFO [StoreOpener-0e34b288123fe8d84b9446eb350ec703-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0e34b288123fe8d84b9446eb350ec703 columnFamilyName fam1 2023-07-16 14:15:51,501 INFO [StoreOpener-0e34b288123fe8d84b9446eb350ec703-1] regionserver.HStore(310): Store=0e34b288123fe8d84b9446eb350ec703/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:51,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,511 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:51,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:51,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0e34b288123fe8d84b9446eb350ec703; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11464151520, jitterRate=0.06768231093883514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:51,514 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0e34b288123fe8d84b9446eb350ec703: 2023-07-16 14:15:51,515 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703., pid=18, masterSystemTime=1689516951489 2023-07-16 14:15:51,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:51,516 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:51,517 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=0e34b288123fe8d84b9446eb350ec703, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:51,517 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516951517"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516951517"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516951517"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516951517"}]},"ts":"1689516951517"} 2023-07-16 14:15:51,520 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 14:15:51,520 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 0e34b288123fe8d84b9446eb350ec703, server=jenkins-hbase4.apache.org,45275,1689516949681 in 181 msec 2023-07-16 14:15:51,521 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-16 14:15:51,522 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, ASSIGN in 338 msec 2023-07-16 14:15:51,523 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:51,523 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516951523"}]},"ts":"1689516951523"} 2023-07-16 14:15:51,525 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-16 14:15:51,529 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:51,530 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 396 msec 2023-07-16 14:15:51,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-16 14:15:51,743 INFO [Listener at localhost/33357] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-16 14:15:51,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:51,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-16 14:15:51,748 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:51,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-16 14:15:51,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 14:15:51,771 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-16 14:15:51,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 14:15:51,853 INFO [Listener at localhost/33357] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
2023-07-16 14:15:51,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:51,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:51,856 INFO [Listener at localhost/33357] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-16 14:15:51,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-16 14:15:51,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-16 14:15:51,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 14:15:51,859 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516951859"}]},"ts":"1689516951859"} 2023-07-16 14:15:51,860 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-16 14:15:51,862 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-16 14:15:51,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, UNASSIGN}] 2023-07-16 14:15:51,863 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, UNASSIGN 2023-07-16 14:15:51,863 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0e34b288123fe8d84b9446eb350ec703, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:51,863 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516951863"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516951863"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516951863"}]},"ts":"1689516951863"} 2023-07-16 14:15:51,864 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 0e34b288123fe8d84b9446eb350ec703, server=jenkins-hbase4.apache.org,45275,1689516949681}] 2023-07-16 14:15:51,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 14:15:52,017 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:52,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0e34b288123fe8d84b9446eb350ec703, disabling compactions & flushes 2023-07-16 14:15:52,018 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:52,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:52,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. after waiting 0 ms 2023-07-16 14:15:52,018 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:52,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:52,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703. 2023-07-16 14:15:52,022 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0e34b288123fe8d84b9446eb350ec703: 2023-07-16 14:15:52,024 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:52,024 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=0e34b288123fe8d84b9446eb350ec703, regionState=CLOSED 2023-07-16 14:15:52,024 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516952024"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516952024"}]},"ts":"1689516952024"} 2023-07-16 14:15:52,027 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-16 14:15:52,027 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 0e34b288123fe8d84b9446eb350ec703, server=jenkins-hbase4.apache.org,45275,1689516949681 in 161 msec 2023-07-16 14:15:52,028 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-16 14:15:52,028 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=0e34b288123fe8d84b9446eb350ec703, UNASSIGN in 165 msec 2023-07-16 14:15:52,029 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516952029"}]},"ts":"1689516952029"} 2023-07-16 14:15:52,039 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-16 14:15:52,040 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-16 14:15:52,043 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 185 msec 2023-07-16 14:15:52,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 14:15:52,161 INFO [Listener at localhost/33357] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-16 14:15:52,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-16 14:15:52,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,167 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-16 14:15:52,169 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:52,174 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:52,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:52,176 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/fam1, FileablePath, hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/recovered.edits] 2023-07-16 14:15:52,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 14:15:52,184 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/recovered.edits/4.seqid to hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/archive/data/np1/table1/0e34b288123fe8d84b9446eb350ec703/recovered.edits/4.seqid 2023-07-16 14:15:52,185 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/.tmp/data/np1/table1/0e34b288123fe8d84b9446eb350ec703 2023-07-16 14:15:52,185 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 14:15:52,187 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,189 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-16 14:15:52,192 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-16 14:15:52,193 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,194 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-16 14:15:52,194 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516952194"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:52,196 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 14:15:52,196 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 0e34b288123fe8d84b9446eb350ec703, NAME => 'np1:table1,,1689516951133.0e34b288123fe8d84b9446eb350ec703.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 14:15:52,196 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-16 14:15:52,196 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516952196"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:52,198 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-16 14:15:52,202 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 14:15:52,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 40 msec 2023-07-16 14:15:52,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 14:15:52,283 INFO [Listener at localhost/33357] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-16 14:15:52,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-16 14:15:52,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,298 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,301 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,304 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 14:15:52,305 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-16 14:15:52,305 DEBUG [Listener at 
localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:52,306 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,307 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 14:15:52,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-16 14:15:52,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40717] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 14:15:52,406 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 14:15:52,406 INFO [Listener at localhost/33357] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 14:15:52,406 DEBUG [Listener at localhost/33357] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2fe4814a to 127.0.0.1:55919 2023-07-16 14:15:52,406 DEBUG [Listener at localhost/33357] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,406 DEBUG [Listener at localhost/33357] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 14:15:52,406 DEBUG [Listener at localhost/33357] util.JVMClusterUtil(257): Found active master hash=1621442243, stopped=false 2023-07-16 14:15:52,407 DEBUG [Listener at localhost/33357] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 14:15:52,407 DEBUG [Listener at localhost/33357] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 14:15:52,407 DEBUG [Listener at localhost/33357] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-16 14:15:52,407 INFO [Listener at localhost/33357] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:52,409 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:52,409 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:52,409 INFO [Listener at localhost/33357] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 14:15:52,409 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:52,409 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:52,409 DEBUG 
[Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:52,410 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:52,410 DEBUG [Listener at localhost/33357] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4ab738be to 127.0.0.1:55919 2023-07-16 14:15:52,410 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:52,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:52,411 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:52,411 DEBUG [Listener at localhost/33357] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,411 INFO [Listener at localhost/33357] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39377,1689516949591' ***** 2023-07-16 14:15:52,411 INFO [Listener at localhost/33357] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:52,411 INFO [Listener at localhost/33357] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45275,1689516949681' ***** 2023-07-16 14:15:52,411 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:52,411 INFO [Listener at localhost/33357] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:52,416 INFO [Listener at localhost/33357] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41339,1689516949734' ***** 2023-07-16 14:15:52,416 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:52,416 INFO [Listener at localhost/33357] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:52,416 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:52,418 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:52,421 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,421 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:52,425 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,425 INFO [RS:0;jenkins-hbase4:39377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6e29eae4{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:52,426 INFO [RS:0;jenkins-hbase4:39377] server.AbstractConnector(383): Stopped ServerConnector@673e454{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:52,426 INFO 
[RS:0;jenkins-hbase4:39377] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:52,426 INFO [RS:1;jenkins-hbase4:45275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5a13fd87{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:52,426 INFO [RS:2;jenkins-hbase4:41339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@61814198{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:52,427 INFO [RS:0;jenkins-hbase4:39377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e0a0d01{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:52,429 INFO [RS:0;jenkins-hbase4:39377] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6495e970{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:52,429 INFO [RS:1;jenkins-hbase4:45275] server.AbstractConnector(383): Stopped ServerConnector@6cb6fe1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:52,429 INFO [RS:2;jenkins-hbase4:41339] server.AbstractConnector(383): Stopped ServerConnector@50406226{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:52,430 INFO [RS:2;jenkins-hbase4:41339] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:52,429 INFO [RS:1;jenkins-hbase4:45275] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:52,430 INFO [RS:2;jenkins-hbase4:41339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4b95847b{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:52,430 INFO [RS:0;jenkins-hbase4:39377] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:52,430 INFO [RS:1;jenkins-hbase4:45275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6de35580{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:52,430 INFO [RS:0;jenkins-hbase4:39377] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:52,430 INFO [RS:2;jenkins-hbase4:41339] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eb4e686{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:52,430 INFO [RS:1;jenkins-hbase4:45275] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@30ac444b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:52,430 INFO [RS:0;jenkins-hbase4:39377] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 14:15:52,430 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(3305): Received CLOSE for a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:52,431 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:52,431 DEBUG [RS:0;jenkins-hbase4:39377] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a0caadb to 127.0.0.1:55919 2023-07-16 14:15:52,431 DEBUG [RS:0;jenkins-hbase4:39377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,431 INFO [RS:1;jenkins-hbase4:45275] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:52,432 INFO [RS:2;jenkins-hbase4:41339] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:52,432 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:52,432 INFO [RS:2;jenkins-hbase4:41339] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:52,432 INFO [RS:1;jenkins-hbase4:45275] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:52,432 INFO [RS:0;jenkins-hbase4:39377] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:52,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a7cdf9517a891cf54a2d525e238d7da0, disabling compactions & flushes 2023-07-16 14:15:52,433 INFO [RS:0;jenkins-hbase4:39377] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:52,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:52,433 INFO [RS:1;jenkins-hbase4:45275] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:52,433 INFO [RS:2;jenkins-hbase4:41339] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:52,433 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(3305): Received CLOSE for 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:52,433 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(3305): Received CLOSE for de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:52,433 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:52,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:52,433 INFO [RS:0;jenkins-hbase4:39377] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:52,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5a995ca3399f240e4fe310538087e5d0, disabling compactions & flushes 2023-07-16 14:15:52,433 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. after waiting 0 ms 2023-07-16 14:15:52,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 
2023-07-16 14:15:52,435 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de8b969996e57525828002bb0b2d24b3, disabling compactions & flushes 2023-07-16 14:15:52,433 DEBUG [RS:1;jenkins-hbase4:45275] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x29b0d739 to 127.0.0.1:55919 2023-07-16 14:15:52,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:52,435 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:52,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:52,435 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:52,436 DEBUG [RS:2;jenkins-hbase4:41339] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1f02cc97 to 127.0.0.1:55919 2023-07-16 14:15:52,435 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 14:15:52,436 DEBUG [RS:2;jenkins-hbase4:41339] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. after waiting 0 ms 2023-07-16 14:15:52,436 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 14:15:52,436 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1478): Online Regions={a7cdf9517a891cf54a2d525e238d7da0=hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:52,437 DEBUG [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1504): Waiting on 1588230740, a7cdf9517a891cf54a2d525e238d7da0 2023-07-16 14:15:52,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:52,435 DEBUG [RS:1;jenkins-hbase4:45275] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. after waiting 0 ms 2023-07-16 14:15:52,437 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 14:15:52,437 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:52,437 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1478): Online Regions={5a995ca3399f240e4fe310538087e5d0=hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0.} 2023-07-16 14:15:52,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 
2023-07-16 14:15:52,436 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 14:15:52,437 DEBUG [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1504): Waiting on 5a995ca3399f240e4fe310538087e5d0 2023-07-16 14:15:52,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing de8b969996e57525828002bb0b2d24b3 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-16 14:15:52,437 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1478): Online Regions={de8b969996e57525828002bb0b2d24b3=hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3.} 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:52,437 DEBUG [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1504): Waiting on de8b969996e57525828002bb0b2d24b3 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:52,437 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5a995ca3399f240e4fe310538087e5d0 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-16 14:15:52,437 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:52,438 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-16 14:15:52,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/quota/a7cdf9517a891cf54a2d525e238d7da0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:52,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 2023-07-16 14:15:52,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a7cdf9517a891cf54a2d525e238d7da0: 2023-07-16 14:15:52,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689516950954.a7cdf9517a891cf54a2d525e238d7da0. 
2023-07-16 14:15:52,463 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/info/6876956d40894c7da74112fcc8f4be2b 2023-07-16 14:15:52,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/.tmp/m/92ffb5fdcba04b2bb390e7484bd57d66 2023-07-16 14:15:52,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/.tmp/info/cecec9af44d540fdb4543a53a53ad615 2023-07-16 14:15:52,471 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6876956d40894c7da74112fcc8f4be2b 2023-07-16 14:15:52,477 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/.tmp/m/92ffb5fdcba04b2bb390e7484bd57d66 as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/m/92ffb5fdcba04b2bb390e7484bd57d66 2023-07-16 14:15:52,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cecec9af44d540fdb4543a53a53ad615 2023-07-16 14:15:52,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/.tmp/info/cecec9af44d540fdb4543a53a53ad615 as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/info/cecec9af44d540fdb4543a53a53ad615 2023-07-16 14:15:52,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cecec9af44d540fdb4543a53a53ad615 2023-07-16 14:15:52,485 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/info/cecec9af44d540fdb4543a53a53ad615, entries=3, sequenceid=8, filesize=5.0 K 2023-07-16 14:15:52,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/m/92ffb5fdcba04b2bb390e7484bd57d66, entries=1, sequenceid=7, filesize=4.9 K 2023-07-16 14:15:52,490 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), 
to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/rep_barrier/39d665b74e174900878b4e960e8cad43 2023-07-16 14:15:52,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for de8b969996e57525828002bb0b2d24b3 in 54ms, sequenceid=8, compaction requested=false 2023-07-16 14:15:52,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 14:15:52,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 5a995ca3399f240e4fe310538087e5d0 in 54ms, sequenceid=7, compaction requested=false 2023-07-16 14:15:52,491 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 14:15:52,497 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39d665b74e174900878b4e960e8cad43 2023-07-16 14:15:52,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/namespace/de8b969996e57525828002bb0b2d24b3/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-16 14:15:52,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/rsgroup/5a995ca3399f240e4fe310538087e5d0/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-16 14:15:52,508 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,508 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:52,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de8b969996e57525828002bb0b2d24b3: 2023-07-16 14:15:52,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689516950590.de8b969996e57525828002bb0b2d24b3. 2023-07-16 14:15:52,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:52,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 2023-07-16 14:15:52,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5a995ca3399f240e4fe310538087e5d0: 2023-07-16 14:15:52,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689516950583.5a995ca3399f240e4fe310538087e5d0. 
2023-07-16 14:15:52,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/table/f515ffc46d764c62be48a24cb782f478 2023-07-16 14:15:52,525 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f515ffc46d764c62be48a24cb782f478 2023-07-16 14:15:52,526 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/info/6876956d40894c7da74112fcc8f4be2b as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/info/6876956d40894c7da74112fcc8f4be2b 2023-07-16 14:15:52,536 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6876956d40894c7da74112fcc8f4be2b 2023-07-16 14:15:52,536 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/info/6876956d40894c7da74112fcc8f4be2b, entries=32, sequenceid=31, filesize=8.5 K 2023-07-16 14:15:52,537 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/rep_barrier/39d665b74e174900878b4e960e8cad43 as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/rep_barrier/39d665b74e174900878b4e960e8cad43 2023-07-16 14:15:52,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 39d665b74e174900878b4e960e8cad43 2023-07-16 14:15:52,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/rep_barrier/39d665b74e174900878b4e960e8cad43, entries=1, sequenceid=31, filesize=4.9 K 2023-07-16 14:15:52,545 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/.tmp/table/f515ffc46d764c62be48a24cb782f478 as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/table/f515ffc46d764c62be48a24cb782f478 2023-07-16 14:15:52,617 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f515ffc46d764c62be48a24cb782f478 2023-07-16 14:15:52,617 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/table/f515ffc46d764c62be48a24cb782f478, entries=8, sequenceid=31, filesize=5.2 K 2023-07-16 14:15:52,618 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 
KB/11312, currentSize=0 B/0 for 1588230740 in 181ms, sequenceid=31, compaction requested=false 2023-07-16 14:15:52,618 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 14:15:52,632 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-16 14:15:52,632 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:52,632 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:52,632 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:52,633 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:52,637 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39377,1689516949591; all regions closed. 2023-07-16 14:15:52,637 DEBUG [RS:0;jenkins-hbase4:39377] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 14:15:52,637 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45275,1689516949681; all regions closed. 2023-07-16 14:15:52,637 DEBUG [RS:1;jenkins-hbase4:45275] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 14:15:52,637 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41339,1689516949734; all regions closed. 2023-07-16 14:15:52,637 DEBUG [RS:2;jenkins-hbase4:41339] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 14:15:52,651 DEBUG [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs 2023-07-16 14:15:52,652 INFO [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39377%2C1689516949591.meta:.meta(num 1689516950530) 2023-07-16 14:15:52,654 DEBUG [RS:2;jenkins-hbase4:41339] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs 2023-07-16 14:15:52,654 INFO [RS:2;jenkins-hbase4:41339] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41339%2C1689516949734:(num 1689516950312) 2023-07-16 14:15:52,654 DEBUG [RS:2;jenkins-hbase4:41339] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,655 INFO [RS:2;jenkins-hbase4:41339] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,655 INFO [RS:2;jenkins-hbase4:41339] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:52,655 INFO [RS:2;jenkins-hbase4:41339] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:52,655 INFO [RS:2;jenkins-hbase4:41339] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-16 14:15:52,655 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:52,655 INFO [RS:2;jenkins-hbase4:41339] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:52,657 INFO [RS:2;jenkins-hbase4:41339] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41339 2023-07-16 14:15:52,660 DEBUG [RS:1;jenkins-hbase4:45275] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs 2023-07-16 14:15:52,660 INFO [RS:1;jenkins-hbase4:45275] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45275%2C1689516949681:(num 1689516950312) 2023-07-16 14:15:52,660 DEBUG [RS:1;jenkins-hbase4:45275] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,660 INFO [RS:1;jenkins-hbase4:45275] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,660 INFO [RS:1;jenkins-hbase4:45275] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:52,661 INFO [RS:1;jenkins-hbase4:45275] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:52,661 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:52,661 INFO [RS:1;jenkins-hbase4:45275] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:52,661 INFO [RS:1;jenkins-hbase4:45275] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41339,1689516949734 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,662 DEBUG [Listener at localhost/33357-EventThread] 
zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,665 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41339,1689516949734] 2023-07-16 14:15:52,665 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41339,1689516949734; numProcessing=1 2023-07-16 14:15:52,666 INFO [RS:1;jenkins-hbase4:45275] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45275 2023-07-16 14:15:52,667 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41339,1689516949734 already deleted, retry=false 2023-07-16 14:15:52,667 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41339,1689516949734 expired; onlineServers=2 2023-07-16 14:15:52,668 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:52,668 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45275,1689516949681 2023-07-16 14:15:52,669 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,671 DEBUG [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/oldWALs 2023-07-16 14:15:52,672 INFO [RS:0;jenkins-hbase4:39377] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39377%2C1689516949591:(num 1689516950311) 2023-07-16 14:15:52,672 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45275,1689516949681] 2023-07-16 14:15:52,672 DEBUG [RS:0;jenkins-hbase4:39377] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,672 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45275,1689516949681; numProcessing=2 2023-07-16 14:15:52,672 INFO [RS:0;jenkins-hbase4:39377] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:52,672 INFO [RS:0;jenkins-hbase4:39377] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:52,672 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 14:15:52,673 INFO [RS:0;jenkins-hbase4:39377] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39377 2023-07-16 14:15:52,676 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45275,1689516949681 already deleted, retry=false 2023-07-16 14:15:52,676 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45275,1689516949681 expired; onlineServers=1 2023-07-16 14:15:52,677 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39377,1689516949591 2023-07-16 14:15:52,677 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:52,678 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39377,1689516949591] 2023-07-16 14:15:52,678 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39377,1689516949591; numProcessing=3 2023-07-16 14:15:52,680 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39377,1689516949591 already deleted, retry=false 2023-07-16 14:15:52,680 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39377,1689516949591 expired; onlineServers=0 2023-07-16 14:15:52,680 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40717,1689516949514' ***** 2023-07-16 14:15:52,681 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 14:15:52,681 DEBUG [M:0;jenkins-hbase4:40717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@649ab60f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:52,681 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:52,683 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:52,683 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:52,683 INFO [M:0;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b9bbe66{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:52,684 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:52,684 
INFO [M:0;jenkins-hbase4:40717] server.AbstractConnector(383): Stopped ServerConnector@38d09a26{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:52,684 INFO [M:0;jenkins-hbase4:40717] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:52,684 INFO [M:0;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6eca1326{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:52,685 INFO [M:0;jenkins-hbase4:40717] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33eaf445{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:52,685 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40717,1689516949514 2023-07-16 14:15:52,685 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40717,1689516949514; all regions closed. 2023-07-16 14:15:52,685 DEBUG [M:0;jenkins-hbase4:40717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:52,686 INFO [M:0;jenkins-hbase4:40717] master.HMaster(1491): Stopping master jetty server 2023-07-16 14:15:52,687 INFO [M:0;jenkins-hbase4:40717] server.AbstractConnector(383): Stopped ServerConnector@7af59d75{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:52,687 DEBUG [M:0;jenkins-hbase4:40717] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 14:15:52,687 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 14:15:52,687 DEBUG [M:0;jenkins-hbase4:40717] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 14:15:52,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516950032] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516950032,5,FailOnTimeoutGroup] 2023-07-16 14:15:52,688 INFO [M:0;jenkins-hbase4:40717] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 14:15:52,688 INFO [M:0;jenkins-hbase4:40717] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-16 14:15:52,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516950031] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516950031,5,FailOnTimeoutGroup] 2023-07-16 14:15:52,690 INFO [M:0;jenkins-hbase4:40717] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:52,690 DEBUG [M:0;jenkins-hbase4:40717] master.HMaster(1512): Stopping service threads 2023-07-16 14:15:52,690 INFO [M:0;jenkins-hbase4:40717] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 14:15:52,690 ERROR [M:0;jenkins-hbase4:40717] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 14:15:52,691 INFO [M:0;jenkins-hbase4:40717] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 14:15:52,691 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 14:15:52,692 DEBUG [M:0;jenkins-hbase4:40717] zookeeper.ZKUtil(398): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 14:15:52,692 WARN [M:0;jenkins-hbase4:40717] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 14:15:52,692 INFO [M:0;jenkins-hbase4:40717] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 14:15:52,692 INFO [M:0;jenkins-hbase4:40717] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 14:15:52,692 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:52,693 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:52,693 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:52,693 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:52,693 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 14:15:52,693 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-16 14:15:52,710 INFO [M:0;jenkins-hbase4:40717] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6120ae6e32b24defb806d8c364971752 2023-07-16 14:15:52,718 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/6120ae6e32b24defb806d8c364971752 as hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6120ae6e32b24defb806d8c364971752 2023-07-16 14:15:52,723 INFO [M:0;jenkins-hbase4:40717] regionserver.HStore(1080): Added hdfs://localhost:43571/user/jenkins/test-data/f31c0053-c1b4-ebea-c613-3bc13263b593/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/6120ae6e32b24defb806d8c364971752, entries=24, sequenceid=194, filesize=12.4 K 2023-07-16 14:15:52,725 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95208, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=194, compaction requested=false 2023-07-16 14:15:52,729 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:52,729 DEBUG [M:0;jenkins-hbase4:40717] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:52,736 INFO [M:0;jenkins-hbase4:40717] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 14:15:52,736 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:52,737 INFO [M:0;jenkins-hbase4:40717] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40717 2023-07-16 14:15:52,738 DEBUG [M:0;jenkins-hbase4:40717] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40717,1689516949514 already deleted, retry=false 2023-07-16 14:15:52,810 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:52,810 INFO [RS:0;jenkins-hbase4:39377] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39377,1689516949591; zookeeper connection closed. 
2023-07-16 14:15:52,811 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:39377-0x1016e7d42f00001, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:52,812 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4d02c25f] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4d02c25f 2023-07-16 14:15:52,911 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:52,911 INFO [RS:1;jenkins-hbase4:45275] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45275,1689516949681; zookeeper connection closed. 2023-07-16 14:15:52,911 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:45275-0x1016e7d42f00002, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:52,913 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49b98561] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49b98561 2023-07-16 14:15:53,011 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:53,011 INFO [RS:2;jenkins-hbase4:41339] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41339,1689516949734; zookeeper connection closed. 2023-07-16 14:15:53,011 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): regionserver:41339-0x1016e7d42f00003, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:53,012 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1a20c36a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1a20c36a 2023-07-16 14:15:53,012 INFO [Listener at localhost/33357] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-16 14:15:53,111 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:53,111 INFO [M:0;jenkins-hbase4:40717] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40717,1689516949514; zookeeper connection closed. 
2023-07-16 14:15:53,111 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): master:40717-0x1016e7d42f00000, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:53,112 WARN [Listener at localhost/33357] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:53,116 INFO [Listener at localhost/33357] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:53,221 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:53,222 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1196093704-172.31.14.131-1689516948589 (Datanode Uuid 96d4de68-3ce3-4be5-9a00-28d08ffc636d) service to localhost/127.0.0.1:43571 2023-07-16 14:15:53,222 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data5/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,223 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data6/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,225 WARN [Listener at localhost/33357] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:53,228 INFO [Listener at localhost/33357] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:53,331 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:53,331 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1196093704-172.31.14.131-1689516948589 (Datanode Uuid 024ebe1f-3d61-4cbe-b799-05f6b06393a7) service to localhost/127.0.0.1:43571 2023-07-16 14:15:53,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data3/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,332 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data4/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,333 WARN [Listener at localhost/33357] datanode.DirectoryScanner(534): 
DirectoryScanner: shutdown has been called 2023-07-16 14:15:53,338 INFO [Listener at localhost/33357] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:53,440 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:53,440 WARN [BP-1196093704-172.31.14.131-1689516948589 heartbeating to localhost/127.0.0.1:43571] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1196093704-172.31.14.131-1689516948589 (Datanode Uuid 32f03ae5-da48-46c0-a229-c6545176d025) service to localhost/127.0.0.1:43571 2023-07-16 14:15:53,441 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data1/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,441 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/cluster_fe4796b1-8364-c2f8-a5a0-c2324da1c852/dfs/data/data2/current/BP-1196093704-172.31.14.131-1689516948589] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:53,450 INFO [Listener at localhost/33357] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:53,566 INFO [Listener at localhost/33357] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.log.dir so I do NOT create it in target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/f11a94b6-5466-e4bc-3613-3aff015df267/hadoop.tmp.dir so I do NOT create it in target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd, deleteOnExit=true 2023-07-16 14:15:53,592 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/test.cache.data in system properties and HBase conf 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir in system properties and HBase conf 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 14:15:53,593 DEBUG [Listener at localhost/33357] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-16 14:15:53,593 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:53,594 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 14:15:53,595 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/nfs.dump.dir in system properties and HBase conf 2023-07-16 14:15:53,595 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/java.io.tmpdir in system properties and HBase conf 2023-07-16 14:15:53,595 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 14:15:53,595 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 14:15:53,595 INFO [Listener at localhost/33357] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 14:15:53,599 WARN [Listener at localhost/33357] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:53,599 WARN [Listener at localhost/33357] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:53,646 WARN [Listener at localhost/33357] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:53,648 INFO [Listener at localhost/33357] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:53,655 INFO 
[Listener at localhost/33357] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/java.io.tmpdir/Jetty_localhost_38049_hdfs____.2g7acm/webapp 2023-07-16 14:15:53,664 DEBUG [Listener at localhost/33357-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016e7d42f0000a, quorum=127.0.0.1:55919, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 14:15:53,664 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016e7d42f0000a, quorum=127.0.0.1:55919, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 14:15:53,752 INFO [Listener at localhost/33357] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38049 2023-07-16 14:15:53,757 WARN [Listener at localhost/33357] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 14:15:53,758 WARN [Listener at localhost/33357] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 14:15:53,808 WARN [Listener at localhost/33443] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:53,830 WARN [Listener at localhost/33443] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:53,832 WARN [Listener at localhost/33443] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:53,833 INFO [Listener at localhost/33443] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:53,840 INFO [Listener at localhost/33443] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/java.io.tmpdir/Jetty_localhost_46309_datanode____.6e7rdr/webapp 2023-07-16 14:15:53,933 INFO [Listener at localhost/33443] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46309 2023-07-16 14:15:53,941 WARN [Listener at localhost/36335] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:53,967 WARN [Listener at localhost/36335] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:53,969 WARN [Listener at localhost/36335] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:53,970 INFO [Listener at localhost/36335] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:53,973 INFO [Listener at localhost/36335] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/java.io.tmpdir/Jetty_localhost_35165_datanode____.tx84cr/webapp 2023-07-16 14:15:54,055 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x769ec219989ff5c0: Processing first storage report for DS-89e45247-2462-4760-bb6a-8c1af26f5d0f from datanode 79c2d733-84c9-4322-85e5-83cfb545620f 2023-07-16 14:15:54,055 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x769ec219989ff5c0: from storage DS-89e45247-2462-4760-bb6a-8c1af26f5d0f node DatanodeRegistration(127.0.0.1:40721, datanodeUuid=79c2d733-84c9-4322-85e5-83cfb545620f, infoPort=41991, infoSecurePort=0, ipcPort=36335, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,055 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x769ec219989ff5c0: Processing first storage report for DS-5b971a87-fc2b-483c-93e4-b0eb52e83561 from datanode 79c2d733-84c9-4322-85e5-83cfb545620f 2023-07-16 14:15:54,055 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x769ec219989ff5c0: from storage DS-5b971a87-fc2b-483c-93e4-b0eb52e83561 node DatanodeRegistration(127.0.0.1:40721, datanodeUuid=79c2d733-84c9-4322-85e5-83cfb545620f, infoPort=41991, infoSecurePort=0, ipcPort=36335, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,077 INFO [Listener at localhost/36335] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 2023-07-16 14:15:54,084 WARN [Listener at localhost/45629] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:54,097 WARN [Listener at localhost/45629] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 14:15:54,100 WARN [Listener at localhost/45629] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 14:15:54,101 INFO [Listener at localhost/45629] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 14:15:54,105 INFO [Listener at localhost/45629] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/java.io.tmpdir/Jetty_localhost_46083_datanode____n3h0b4/webapp 2023-07-16 14:15:54,179 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8bcc21337d9876cb: Processing first storage report for DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2 from datanode aa917b2b-c86d-4321-b990-7013d3a67aa1 2023-07-16 14:15:54,179 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8bcc21337d9876cb: from storage DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2 node DatanodeRegistration(127.0.0.1:36819, datanodeUuid=aa917b2b-c86d-4321-b990-7013d3a67aa1, infoPort=39715, infoSecurePort=0, ipcPort=45629, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,179 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8bcc21337d9876cb: Processing first storage report for DS-b9651e8b-715e-449a-9910-50d9523151e7 from datanode 
aa917b2b-c86d-4321-b990-7013d3a67aa1 2023-07-16 14:15:54,179 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8bcc21337d9876cb: from storage DS-b9651e8b-715e-449a-9910-50d9523151e7 node DatanodeRegistration(127.0.0.1:36819, datanodeUuid=aa917b2b-c86d-4321-b990-7013d3a67aa1, infoPort=39715, infoSecurePort=0, ipcPort=45629, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,211 INFO [Listener at localhost/45629] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46083 2023-07-16 14:15:54,253 WARN [Listener at localhost/37985] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 14:15:54,355 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb776124bb57bc201: Processing first storage report for DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a from datanode 8e1c8203-2a11-434a-b6c3-f81853251689 2023-07-16 14:15:54,355 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb776124bb57bc201: from storage DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a node DatanodeRegistration(127.0.0.1:43309, datanodeUuid=8e1c8203-2a11-434a-b6c3-f81853251689, infoPort=42597, infoSecurePort=0, ipcPort=37985, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,355 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb776124bb57bc201: Processing first storage report for DS-a7ce527a-8d46-4e0a-9fd3-2ebe18dadef3 from datanode 8e1c8203-2a11-434a-b6c3-f81853251689 2023-07-16 14:15:54,355 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb776124bb57bc201: from storage DS-a7ce527a-8d46-4e0a-9fd3-2ebe18dadef3 node DatanodeRegistration(127.0.0.1:43309, datanodeUuid=8e1c8203-2a11-434a-b6c3-f81853251689, infoPort=42597, infoSecurePort=0, ipcPort=37985, storageInfo=lv=-57;cid=testClusterID;nsid=458941416;c=1689516953602), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 14:15:54,361 DEBUG [Listener at localhost/37985] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13 2023-07-16 14:15:54,363 INFO [Listener at localhost/37985] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/zookeeper_0, clientPort=50636, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 14:15:54,364 INFO [Listener at localhost/37985] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=50636 2023-07-16 14:15:54,365 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,365 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,383 INFO [Listener at localhost/37985] util.FSUtils(471): Created version file at hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 with version=8 2023-07-16 14:15:54,384 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:42609/user/jenkins/test-data/bc424748-3584-1e4f-f5af-d2d8151f48a1/hbase-staging 2023-07-16 14:15:54,385 DEBUG [Listener at localhost/37985] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 14:15:54,385 DEBUG [Listener at localhost/37985] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 14:15:54,385 DEBUG [Listener at localhost/37985] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 14:15:54,385 DEBUG [Listener at localhost/37985] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:54,386 INFO [Listener at localhost/37985] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:54,387 INFO [Listener at localhost/37985] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42483 2023-07-16 14:15:54,387 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,388 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,389 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=master:42483 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:54,396 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:424830x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:54,397 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:42483-0x1016e7d55fe0000 connected 2023-07-16 14:15:54,416 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:54,416 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:54,417 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:54,417 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42483 2023-07-16 14:15:54,417 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42483 2023-07-16 14:15:54,418 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42483 2023-07-16 14:15:54,418 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42483 2023-07-16 14:15:54,418 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42483 2023-07-16 14:15:54,420 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:54,420 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:54,421 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:54,421 INFO [Listener at localhost/37985] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 14:15:54,421 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:54,421 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:54,421 INFO [Listener at localhost/37985] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:54,422 INFO [Listener at localhost/37985] http.HttpServer(1146): Jetty bound to port 42119 2023-07-16 14:15:54,422 INFO [Listener at localhost/37985] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:54,427 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,427 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@633dccf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:54,428 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,428 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69823956{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:54,434 INFO [Listener at localhost/37985] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:54,435 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:54,435 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:54,435 INFO [Listener at localhost/37985] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:54,437 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,439 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@105e13de{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:54,440 INFO [Listener at localhost/37985] server.AbstractConnector(333): Started ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:42119} 2023-07-16 14:15:54,440 INFO [Listener at localhost/37985] server.Server(415): Started @42030ms 2023-07-16 14:15:54,440 INFO [Listener at localhost/37985] master.HMaster(444): hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0, hbase.cluster.distributed=false 2023-07-16 14:15:54,455 INFO [Listener at localhost/37985] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:54,455 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,455 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,455 INFO [Listener at localhost/37985] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 
14:15:54,455 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,455 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:54,456 INFO [Listener at localhost/37985] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:54,456 INFO [Listener at localhost/37985] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36389 2023-07-16 14:15:54,456 INFO [Listener at localhost/37985] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:54,458 DEBUG [Listener at localhost/37985] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:54,459 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,460 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,461 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36389 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:54,464 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:363890x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:54,465 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:363890x0, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:54,466 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36389-0x1016e7d55fe0001 connected 2023-07-16 14:15:54,466 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:54,466 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:54,470 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36389 2023-07-16 14:15:54,471 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36389 2023-07-16 14:15:54,471 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36389 2023-07-16 14:15:54,471 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36389 2023-07-16 14:15:54,471 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36389 2023-07-16 14:15:54,473 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:54,473 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:54,473 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] http.HttpServer(1146): Jetty bound to port 36257 2023-07-16 14:15:54,474 INFO [Listener at localhost/37985] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:54,476 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,476 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@666a8c86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:54,477 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,477 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:54,481 INFO [Listener at localhost/37985] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:54,482 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:54,482 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:54,482 INFO [Listener at localhost/37985] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 14:15:54,483 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,484 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2ab355a6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:54,485 INFO [Listener at localhost/37985] server.AbstractConnector(333): Started ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:36257} 2023-07-16 14:15:54,485 INFO [Listener at localhost/37985] server.Server(415): Started @42074ms 2023-07-16 14:15:54,495 INFO [Listener at localhost/37985] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:54,496 INFO [Listener at localhost/37985] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:54,497 INFO [Listener at localhost/37985] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33211 2023-07-16 14:15:54,497 INFO [Listener at localhost/37985] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:54,498 DEBUG [Listener at localhost/37985] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:54,499 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,500 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,501 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33211 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:54,504 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:332110x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:54,505 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:332110x0, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:54,505 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33211-0x1016e7d55fe0002 connected 2023-07-16 14:15:54,506 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:54,506 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:54,511 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33211 2023-07-16 14:15:54,511 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33211 2023-07-16 14:15:54,511 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33211 2023-07-16 14:15:54,511 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33211 2023-07-16 14:15:54,512 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33211 2023-07-16 14:15:54,514 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:54,514 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:54,514 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:54,514 INFO [Listener at localhost/37985] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:54,515 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:54,515 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:54,515 INFO [Listener at localhost/37985] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:54,515 INFO [Listener at localhost/37985] http.HttpServer(1146): Jetty bound to port 44877 2023-07-16 14:15:54,515 INFO [Listener at localhost/37985] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:54,517 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,517 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@730a2ea8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:54,518 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,518 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:54,522 INFO [Listener at localhost/37985] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:54,522 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:54,523 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:54,523 INFO [Listener at localhost/37985] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:54,523 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,524 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@24fd7fc7{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:54,526 INFO [Listener at localhost/37985] server.AbstractConnector(333): Started ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:44877} 2023-07-16 14:15:54,527 INFO [Listener at localhost/37985] server.Server(415): Started @42116ms 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, 
maxQueueLength=30, handlerCount=3 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:54,538 INFO [Listener at localhost/37985] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:54,541 INFO [Listener at localhost/37985] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35057 2023-07-16 14:15:54,541 INFO [Listener at localhost/37985] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:54,542 DEBUG [Listener at localhost/37985] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:54,542 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,543 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,544 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35057 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:54,548 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:350570x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:54,549 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35057-0x1016e7d55fe0003 connected 2023-07-16 14:15:54,549 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:54,549 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:54,550 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:54,550 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35057 2023-07-16 14:15:54,551 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35057 2023-07-16 14:15:54,552 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35057 2023-07-16 14:15:54,555 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35057 2023-07-16 14:15:54,556 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35057 2023-07-16 14:15:54,558 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'safety' 
(class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:54,558 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:54,558 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:54,559 INFO [Listener at localhost/37985] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:54,559 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:54,559 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:54,559 INFO [Listener at localhost/37985] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 14:15:54,560 INFO [Listener at localhost/37985] http.HttpServer(1146): Jetty bound to port 36451 2023-07-16 14:15:54,560 INFO [Listener at localhost/37985] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:54,562 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,563 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@12ca8bea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:54,563 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,563 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:54,568 INFO [Listener at localhost/37985] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:54,569 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:54,569 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:54,569 INFO [Listener at localhost/37985] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:54,570 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:54,571 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@164f60c8{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:54,573 INFO [Listener at localhost/37985] server.AbstractConnector(333): Started ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:36451} 2023-07-16 14:15:54,573 INFO [Listener at localhost/37985] server.Server(415): Started @42163ms 2023-07-16 14:15:54,575 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:54,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@35983a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:39893} 2023-07-16 14:15:54,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @42171ms 2023-07-16 14:15:54,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,584 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:54,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,586 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:54,586 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:54,586 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:54,586 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:54,587 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,589 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:54,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,42483,1689516954385 from backup master directory 2023-07-16 
14:15:54,590 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:54,591 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,591 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 14:15:54,591 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:54,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,606 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/hbase.id with ID: 06c18206-c009-4e02-8c4f-4260c4018c5a 2023-07-16 14:15:54,616 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:54,618 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x22b8bb45 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:54,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ea65563, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:54,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:54,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 14:15:54,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:54,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, 
tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store-tmp 2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:54,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:54,645 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 14:15:54,645 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:54,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/WALs/jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,648 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42483%2C1689516954385, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/WALs/jenkins-hbase4.apache.org,42483,1689516954385, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/oldWALs, maxLogs=10 2023-07-16 14:15:54,665 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:54,665 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:54,665 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 2023-07-16 14:15:54,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/WALs/jenkins-hbase4.apache.org,42483,1689516954385/jenkins-hbase4.apache.org%2C42483%2C1689516954385.1689516954648 2023-07-16 14:15:54,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK], DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK], DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK]] 2023-07-16 14:15:54,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:54,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:54,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,671 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,673 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 14:15:54,673 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 14:15:54,673 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:54,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,678 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 14:15:54,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:54,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11099956480, jitterRate=0.033764004707336426}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:54,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:54,684 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 14:15:54,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 14:15:54,685 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 14:15:54,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 14:15:54,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 14:15:54,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 14:15:54,686 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 14:15:54,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 14:15:54,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 14:15:54,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 14:15:54,688 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 14:15:54,689 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 14:15:54,691 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 14:15:54,691 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 14:15:54,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 14:15:54,693 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:54,693 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:54,693 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 14:15:54,693 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:54,694 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,694 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,42483,1689516954385, sessionid=0x1016e7d55fe0000, setting cluster-up flag (Was=false) 2023-07-16 14:15:54,700 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,704 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 14:15:54,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,709 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 14:15:54,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:54,715 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.hbase-snapshot/.tmp 2023-07-16 14:15:54,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 14:15:54,716 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 14:15:54,717 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 14:15:54,717 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:54,718 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-16 14:15:54,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:54,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:54,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 14:15:54,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 14:15:54,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:54,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689516984740 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 14:15:54,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,740 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:54,741 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 14:15:54,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 14:15:54,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 14:15:54,741 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 14:15:54,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516954742,5,FailOnTimeoutGroup] 2023-07-16 14:15:54,742 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516954742,5,FailOnTimeoutGroup] 2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 
2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,742 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,743 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:54,758 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:54,758 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:54,758 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 2023-07-16 14:15:54,773 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:54,776 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(951): ClusterId : 
06c18206-c009-4e02-8c4f-4260c4018c5a 2023-07-16 14:15:54,776 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(951): ClusterId : 06c18206-c009-4e02-8c4f-4260c4018c5a 2023-07-16 14:15:54,776 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(951): ClusterId : 06c18206-c009-4e02-8c4f-4260c4018c5a 2023-07-16 14:15:54,780 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:54,780 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:54,780 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:54,780 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:54,782 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/info 2023-07-16 14:15:54,783 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:54,783 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:54,783 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:54,783 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:54,783 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:54,783 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:54,783 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:54,783 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:54,784 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:54,785 DEBUG [StoreOpener-1588230740-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:54,785 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:54,786 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:54,786 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:54,786 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:54,788 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ReadOnlyZKClient(139): Connect 0x6d664fe7 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:54,788 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:54,788 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:54,792 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ReadOnlyZKClient(139): Connect 0x445c719d to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:54,792 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/table 2023-07-16 14:15:54,792 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:54,792 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ReadOnlyZKClient(139): Connect 0x63898e27 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 
2023-07-16 14:15:54,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:54,796 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740 2023-07-16 14:15:54,800 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740 2023-07-16 14:15:54,802 DEBUG [RS:0;jenkins-hbase4:36389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@63a1ac3b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:54,803 DEBUG [RS:0;jenkins-hbase4:36389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@679e2497, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:54,804 DEBUG [RS:1;jenkins-hbase4:33211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54b271db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:54,804 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 14:15:54,804 DEBUG [RS:1;jenkins-hbase4:33211] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52208b35, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:54,805 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:54,807 DEBUG [RS:2;jenkins-hbase4:35057] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@15ef772, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:54,807 DEBUG [RS:2;jenkins-hbase4:35057] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6fc3f6c6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:54,812 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:36389 2023-07-16 14:15:54,812 INFO [RS:0;jenkins-hbase4:36389] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:54,812 INFO [RS:0;jenkins-hbase4:36389] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:54,813 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:54,813 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:33211 2023-07-16 14:15:54,813 INFO [RS:1;jenkins-hbase4:33211] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:54,813 INFO [RS:1;jenkins-hbase4:33211] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:54,813 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 14:15:54,813 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42483,1689516954385 with isa=jenkins-hbase4.apache.org/172.31.14.131:36389, startcode=1689516954455 2023-07-16 14:15:54,813 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:54,813 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42483,1689516954385 with isa=jenkins-hbase4.apache.org/172.31.14.131:33211, startcode=1689516954495 2023-07-16 14:15:54,813 DEBUG [RS:0;jenkins-hbase4:36389] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:54,813 DEBUG [RS:1;jenkins-hbase4:33211] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:54,814 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10301173120, jitterRate=-0.04062849283218384}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:54,814 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:54,814 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:54,814 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:54,814 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:54,814 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:54,814 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:54,815 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35057 2023-07-16 14:15:54,815 INFO [RS:2;jenkins-hbase4:35057] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:54,815 INFO [RS:2;jenkins-hbase4:35057] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:54,815 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 14:15:54,815 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32909, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:54,816 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42483,1689516954385 with isa=jenkins-hbase4.apache.org/172.31.14.131:35057, startcode=1689516954537 2023-07-16 14:15:54,816 DEBUG [RS:2;jenkins-hbase4:35057] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:54,816 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:54,816 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:54,816 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:54,818 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42483] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,819 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37129, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:54,819 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:54,819 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42483] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,819 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42483] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,819 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 2023-07-16 14:15:54,819 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33443 2023-07-16 14:15:54,819 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42119 2023-07-16 14:15:54,819 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 2023-07-16 14:15:54,819 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33443 2023-07-16 14:15:54,820 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42119 2023-07-16 14:15:54,821 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 14:15:54,821 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] 
rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 14:15:54,821 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 14:15:54,821 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 14:15:54,821 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 2023-07-16 14:15:54,821 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 14:15:54,821 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33443 2023-07-16 14:15:54,821 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42119 2023-07-16 14:15:54,821 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 14:15:54,822 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:54,823 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 14:15:54,828 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 14:15:54,829 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,829 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33211,1689516954495] 2023-07-16 14:15:54,829 WARN [RS:1;jenkins-hbase4:33211] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 14:15:54,829 INFO [RS:1;jenkins-hbase4:33211] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:54,829 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36389,1689516954455] 2023-07-16 14:15:54,830 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,829 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,829 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,830 WARN [RS:2;jenkins-hbase4:35057] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:54,830 WARN [RS:0;jenkins-hbase4:36389] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 14:15:54,830 INFO [RS:2;jenkins-hbase4:35057] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:54,830 INFO [RS:0;jenkins-hbase4:36389] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:54,830 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,830 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35057,1689516954537] 2023-07-16 14:15:54,831 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,837 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,837 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,837 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,837 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,838 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, 
quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,838 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,838 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,838 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,838 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,839 DEBUG [RS:2;jenkins-hbase4:35057] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:54,839 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:54,839 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:54,839 INFO [RS:2;jenkins-hbase4:35057] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:54,839 INFO [RS:1;jenkins-hbase4:33211] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:54,839 INFO [RS:0;jenkins-hbase4:36389] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:54,840 INFO [RS:2;jenkins-hbase4:35057] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:54,840 INFO [RS:2;jenkins-hbase4:35057] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:54,840 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,840 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:54,842 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 DEBUG [RS:2;jenkins-hbase4:35057] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,842 INFO [RS:0;jenkins-hbase4:36389] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:54,843 INFO [RS:1;jenkins-hbase4:33211] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:54,843 INFO [RS:0;jenkins-hbase4:36389] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:54,844 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,845 INFO [RS:1;jenkins-hbase4:33211] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:54,845 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:54,845 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,846 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:54,846 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:54,846 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,848 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,848 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,853 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,853 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,854 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:54,855 DEBUG [RS:0;jenkins-hbase4:36389] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,855 DEBUG [RS:1;jenkins-hbase4:33211] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:54,859 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,862 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,862 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,862 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,863 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,863 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,869 INFO [RS:2;jenkins-hbase4:35057] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:54,869 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35057,1689516954537-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,874 INFO [RS:0;jenkins-hbase4:36389] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:54,874 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36389,1689516954455-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:54,879 INFO [RS:1;jenkins-hbase4:33211] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:54,880 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33211,1689516954495-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:54,881 INFO [RS:2;jenkins-hbase4:35057] regionserver.Replication(203): jenkins-hbase4.apache.org,35057,1689516954537 started 2023-07-16 14:15:54,881 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35057,1689516954537, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35057, sessionid=0x1016e7d55fe0003 2023-07-16 14:15:54,881 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:54,881 DEBUG [RS:2;jenkins-hbase4:35057] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,881 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35057,1689516954537' 2023-07-16 14:15:54,881 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:54,881 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35057,1689516954537' 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:54,882 DEBUG [RS:2;jenkins-hbase4:35057] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:54,883 DEBUG [RS:2;jenkins-hbase4:35057] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:54,883 INFO [RS:2;jenkins-hbase4:35057] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:54,883 INFO [RS:2;jenkins-hbase4:35057] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 14:15:54,886 INFO [RS:0;jenkins-hbase4:36389] regionserver.Replication(203): jenkins-hbase4.apache.org,36389,1689516954455 started 2023-07-16 14:15:54,886 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36389,1689516954455, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36389, sessionid=0x1016e7d55fe0001 2023-07-16 14:15:54,886 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:54,886 DEBUG [RS:0;jenkins-hbase4:36389] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36389,1689516954455' 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36389,1689516954455' 2023-07-16 14:15:54,887 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:54,888 DEBUG [RS:0;jenkins-hbase4:36389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:54,888 DEBUG [RS:0;jenkins-hbase4:36389] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:54,888 INFO [RS:0;jenkins-hbase4:36389] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:54,888 INFO [RS:0;jenkins-hbase4:36389] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 14:15:54,895 INFO [RS:1;jenkins-hbase4:33211] regionserver.Replication(203): jenkins-hbase4.apache.org,33211,1689516954495 started 2023-07-16 14:15:54,895 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33211,1689516954495, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33211, sessionid=0x1016e7d55fe0002 2023-07-16 14:15:54,895 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:54,895 DEBUG [RS:1;jenkins-hbase4:33211] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,895 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33211,1689516954495' 2023-07-16 14:15:54,895 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33211,1689516954495' 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:54,896 DEBUG [RS:1;jenkins-hbase4:33211] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:54,897 DEBUG [RS:1;jenkins-hbase4:33211] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:54,897 INFO [RS:1;jenkins-hbase4:33211] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:54,897 INFO [RS:1;jenkins-hbase4:33211] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:54,979 DEBUG [jenkins-hbase4:42483] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:54,980 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36389,1689516954455, state=OPENING 2023-07-16 14:15:54,982 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 14:15:54,983 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:54,984 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:54,984 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36389,1689516954455}] 2023-07-16 14:15:54,986 INFO [RS:2;jenkins-hbase4:35057] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35057%2C1689516954537, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,35057,1689516954537, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs, maxLogs=32 2023-07-16 14:15:54,993 INFO [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36389%2C1689516954455, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,36389,1689516954455, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs, maxLogs=32 2023-07-16 14:15:54,998 INFO [RS:1;jenkins-hbase4:33211] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33211%2C1689516954495, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,33211,1689516954495, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs, maxLogs=32 2023-07-16 14:15:55,029 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 2023-07-16 14:15:55,029 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:55,031 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:55,032 WARN [ReadOnlyZKClient-127.0.0.1:50636@0x22b8bb45] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 14:15:55,033 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:55,045 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:55,045 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 2023-07-16 14:15:55,045 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:55,047 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:55,047 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:55,048 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 2023-07-16 14:15:55,049 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48628, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:55,055 INFO [RS:2;jenkins-hbase4:35057] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,35057,1689516954537/jenkins-hbase4.apache.org%2C35057%2C1689516954537.1689516954988 2023-07-16 14:15:55,055 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36389] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:48628 deadline: 1689517015049, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,058 DEBUG [RS:2;jenkins-hbase4:35057] 
wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK], DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK], DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK]] 2023-07-16 14:15:55,058 INFO [RS:1;jenkins-hbase4:33211] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,33211,1689516954495/jenkins-hbase4.apache.org%2C33211%2C1689516954495.1689516954998 2023-07-16 14:15:55,058 DEBUG [RS:1;jenkins-hbase4:33211] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK], DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK], DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK]] 2023-07-16 14:15:55,059 INFO [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,36389,1689516954455/jenkins-hbase4.apache.org%2C36389%2C1689516954455.1689516954993 2023-07-16 14:15:55,062 DEBUG [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK], DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK], DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK]] 2023-07-16 14:15:55,139 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,140 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:55,142 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:55,146 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 14:15:55,146 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:55,148 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36389%2C1689516954455.meta, suffix=.meta, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,36389,1689516954455, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs, maxLogs=32 2023-07-16 14:15:55,170 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:55,170 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 
2023-07-16 14:15:55,170 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:55,173 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,36389,1689516954455/jenkins-hbase4.apache.org%2C36389%2C1689516954455.meta.1689516955148.meta 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK], DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK], DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK]] 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 14:15:55,174 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 14:15:55,174 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 14:15:55,178 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 14:15:55,179 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/info 2023-07-16 14:15:55,180 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/info 2023-07-16 14:15:55,180 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 14:15:55,181 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:55,181 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 14:15:55,182 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:55,182 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/rep_barrier 2023-07-16 14:15:55,182 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 14:15:55,183 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:55,183 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 14:15:55,184 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/table 2023-07-16 14:15:55,184 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/table 2023-07-16 14:15:55,184 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 14:15:55,185 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:55,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740 2023-07-16 14:15:55,186 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740 2023-07-16 14:15:55,188 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 14:15:55,189 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 14:15:55,190 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11393329600, jitterRate=0.061086505651474}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 14:15:55,190 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 14:15:55,191 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689516955139 2023-07-16 14:15:55,195 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 14:15:55,196 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 14:15:55,196 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,36389,1689516954455, state=OPEN 2023-07-16 14:15:55,199 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 14:15:55,199 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 14:15:55,200 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 14:15:55,200 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,36389,1689516954455 in 215 msec 2023-07-16 14:15:55,202 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 14:15:55,202 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 379 msec 2023-07-16 14:15:55,203 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 484 msec 2023-07-16 14:15:55,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689516955204, completionTime=-1 2023-07-16 14:15:55,204 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 14:15:55,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-16 14:15:55,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 14:15:55,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689517015208 2023-07-16 14:15:55,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689517075209 2023-07-16 14:15:55,209 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42483,1689516954385-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42483,1689516954385-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42483,1689516954385-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:42483, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,216 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 14:15:55,217 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:55,218 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 14:15:55,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 14:15:55,219 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:55,220 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:55,222 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,223 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b empty. 2023-07-16 14:15:55,223 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,223 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 14:15:55,261 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:55,267 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 2766fae315acce3173e6c52fdc18b07b, NAME => 'hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp 2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 2766fae315acce3173e6c52fdc18b07b, disabling compactions & flushes 2023-07-16 14:15:55,281 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 
2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. after waiting 0 ms 2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:55,281 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:55,281 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 2766fae315acce3173e6c52fdc18b07b: 2023-07-16 14:15:55,284 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:55,285 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516955285"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516955285"}]},"ts":"1689516955285"} 2023-07-16 14:15:55,288 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:55,288 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:55,289 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516955289"}]},"ts":"1689516955289"} 2023-07-16 14:15:55,290 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 14:15:55,294 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:55,294 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:55,294 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:55,294 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:55,294 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:55,294 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2766fae315acce3173e6c52fdc18b07b, ASSIGN}] 2023-07-16 14:15:55,296 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=2766fae315acce3173e6c52fdc18b07b, ASSIGN 2023-07-16 14:15:55,297 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=2766fae315acce3173e6c52fdc18b07b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36389,1689516954455; forceNewPlan=false, retain=false 2023-07-16 14:15:55,358 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:55,361 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 14:15:55,363 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:55,364 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:55,366 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,367 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac empty. 
2023-07-16 14:15:55,367 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,367 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 14:15:55,384 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:55,386 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => bcbb66dfa84ee142cb7fccaeec781eac, NAME => 'hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp 2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing bcbb66dfa84ee142cb7fccaeec781eac, disabling compactions & flushes 2023-07-16 14:15:55,399 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. after waiting 0 ms 2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,399 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 
2023-07-16 14:15:55,399 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for bcbb66dfa84ee142cb7fccaeec781eac: 2023-07-16 14:15:55,402 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:55,403 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516955403"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516955403"}]},"ts":"1689516955403"} 2023-07-16 14:15:55,405 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:55,406 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:55,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516955406"}]},"ts":"1689516955406"} 2023-07-16 14:15:55,410 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 14:15:55,414 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:55,414 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:55,414 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:55,414 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:55,414 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:55,415 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bcbb66dfa84ee142cb7fccaeec781eac, ASSIGN}] 2023-07-16 14:15:55,416 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bcbb66dfa84ee142cb7fccaeec781eac, ASSIGN 2023-07-16 14:15:55,417 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=bcbb66dfa84ee142cb7fccaeec781eac, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33211,1689516954495; forceNewPlan=false, retain=false 2023-07-16 14:15:55,417 INFO [jenkins-hbase4:42483] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 14:15:55,421 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bcbb66dfa84ee142cb7fccaeec781eac, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,421 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516955421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516955421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516955421"}]},"ts":"1689516955421"} 2023-07-16 14:15:55,421 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2766fae315acce3173e6c52fdc18b07b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,422 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516955421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516955421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516955421"}]},"ts":"1689516955421"} 2023-07-16 14:15:55,424 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure bcbb66dfa84ee142cb7fccaeec781eac, server=jenkins-hbase4.apache.org,33211,1689516954495}] 2023-07-16 14:15:55,427 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=5, state=RUNNABLE; OpenRegionProcedure 2766fae315acce3173e6c52fdc18b07b, server=jenkins-hbase4.apache.org,36389,1689516954455}] 2023-07-16 14:15:55,578 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,578 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 14:15:55,580 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57590, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 14:15:55,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bcbb66dfa84ee142cb7fccaeec781eac, NAME => 'hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:55,584 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 
2023-07-16 14:15:55,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2766fae315acce3173e6c52fdc18b07b, NAME => 'hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:55,584 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. service=MultiRowMutationService 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,585 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,585 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,587 INFO [StoreOpener-2766fae315acce3173e6c52fdc18b07b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,587 INFO [StoreOpener-bcbb66dfa84ee142cb7fccaeec781eac-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,589 
DEBUG [StoreOpener-bcbb66dfa84ee142cb7fccaeec781eac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/m 2023-07-16 14:15:55,589 DEBUG [StoreOpener-bcbb66dfa84ee142cb7fccaeec781eac-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/m 2023-07-16 14:15:55,589 DEBUG [StoreOpener-2766fae315acce3173e6c52fdc18b07b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/info 2023-07-16 14:15:55,589 DEBUG [StoreOpener-2766fae315acce3173e6c52fdc18b07b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/info 2023-07-16 14:15:55,589 INFO [StoreOpener-bcbb66dfa84ee142cb7fccaeec781eac-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bcbb66dfa84ee142cb7fccaeec781eac columnFamilyName m 2023-07-16 14:15:55,589 INFO [StoreOpener-2766fae315acce3173e6c52fdc18b07b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2766fae315acce3173e6c52fdc18b07b columnFamilyName info 2023-07-16 14:15:55,590 INFO [StoreOpener-bcbb66dfa84ee142cb7fccaeec781eac-1] regionserver.HStore(310): Store=bcbb66dfa84ee142cb7fccaeec781eac/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:55,590 INFO [StoreOpener-2766fae315acce3173e6c52fdc18b07b-1] regionserver.HStore(310): Store=2766fae315acce3173e6c52fdc18b07b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:55,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,591 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,591 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:55,594 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:55,596 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:55,597 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bcbb66dfa84ee142cb7fccaeec781eac; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@5c492a86, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:55,597 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bcbb66dfa84ee142cb7fccaeec781eac: 2023-07-16 14:15:55,597 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:55,598 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2766fae315acce3173e6c52fdc18b07b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11781052640, jitterRate=0.09719602763652802}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:55,598 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2766fae315acce3173e6c52fdc18b07b: 2023-07-16 14:15:55,598 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac., pid=8, masterSystemTime=1689516955578 2023-07-16 14:15:55,601 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b., pid=9, masterSystemTime=1689516955581 2023-07-16 14:15:55,602 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,603 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:55,603 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bcbb66dfa84ee142cb7fccaeec781eac, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,603 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689516955603"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516955603"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516955603"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516955603"}]},"ts":"1689516955603"} 2023-07-16 14:15:55,604 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:55,604 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:55,604 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=2766fae315acce3173e6c52fdc18b07b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,604 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689516955604"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516955604"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516955604"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516955604"}]},"ts":"1689516955604"} 2023-07-16 14:15:55,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-16 14:15:55,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=5 2023-07-16 14:15:55,612 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure bcbb66dfa84ee142cb7fccaeec781eac, server=jenkins-hbase4.apache.org,33211,1689516954495 in 182 msec 2023-07-16 14:15:55,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=5, state=SUCCESS; OpenRegionProcedure 2766fae315acce3173e6c52fdc18b07b, server=jenkins-hbase4.apache.org,36389,1689516954455 in 179 msec 2023-07-16 14:15:55,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-16 14:15:55,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-16 14:15:55,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=bcbb66dfa84ee142cb7fccaeec781eac, ASSIGN in 197 msec 2023-07-16 14:15:55,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure 
table=hbase:namespace, region=2766fae315acce3173e6c52fdc18b07b, ASSIGN in 318 msec 2023-07-16 14:15:55,614 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:55,614 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:55,615 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516955615"}]},"ts":"1689516955615"} 2023-07-16 14:15:55,615 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516955615"}]},"ts":"1689516955615"} 2023-07-16 14:15:55,617 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 14:15:55,618 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 14:15:55,619 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:55,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 14:15:55,621 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:55,621 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 403 msec 2023-07-16 14:15:55,622 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:55,622 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:55,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 263 msec 2023-07-16 14:15:55,626 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 14:15:55,634 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:55,636 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-16 14:15:55,638 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] 
procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 14:15:55,643 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:55,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-16 14:15:55,653 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 14:15:55,655 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.064sec 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42483,1689516954385-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 14:15:55,655 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42483,1689516954385-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 14:15:55,656 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 14:15:55,668 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:55,671 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57606, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:55,675 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 14:15:55,675 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-16 14:15:55,677 DEBUG [Listener at localhost/37985] zookeeper.ReadOnlyZKClient(139): Connect 0x3720103e to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:55,690 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:55,690 DEBUG [Listener at localhost/37985] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@10a31e76, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:55,690 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:55,692 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:55,694 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 14:15:55,695 DEBUG [hconnection-0x7d87758a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:55,698 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48648, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:55,700 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:55,700 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:55,704 DEBUG [Listener at localhost/37985] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 14:15:55,706 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35312, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 14:15:55,710 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 14:15:55,710 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:55,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 14:15:55,715 DEBUG [Listener at localhost/37985] zookeeper.ReadOnlyZKClient(139): Connect 0x46a2470b to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:55,728 DEBUG [Listener at localhost/37985] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf5170, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:55,728 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:55,733 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:55,734 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016e7d55fe000a connected 2023-07-16 14:15:55,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:55,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:55,740 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 14:15:55,752 INFO [Listener at localhost/37985] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 14:15:55,752 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:55,752 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:55,752 INFO [Listener at localhost/37985] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 14:15:55,752 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 14:15:55,753 INFO [Listener at localhost/37985] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 14:15:55,753 INFO [Listener at localhost/37985] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 14:15:55,753 INFO [Listener at localhost/37985] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42869 2023-07-16 14:15:55,754 INFO [Listener at localhost/37985] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 14:15:55,755 DEBUG [Listener at localhost/37985] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 14:15:55,755 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:55,756 INFO [Listener at localhost/37985] fs.HFileSystem(337): Added 
intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 14:15:55,757 INFO [Listener at localhost/37985] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42869 connecting to ZooKeeper ensemble=127.0.0.1:50636 2023-07-16 14:15:55,760 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:428690x0, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 14:15:55,761 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(162): regionserver:428690x0, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 14:15:55,762 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42869-0x1016e7d55fe000b connected 2023-07-16 14:15:55,763 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 14:15:55,763 DEBUG [Listener at localhost/37985] zookeeper.ZKUtil(164): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 14:15:55,763 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42869 2023-07-16 14:15:55,764 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42869 2023-07-16 14:15:55,764 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42869 2023-07-16 14:15:55,764 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42869 2023-07-16 14:15:55,764 DEBUG [Listener at localhost/37985] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42869 2023-07-16 14:15:55,766 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 14:15:55,766 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 14:15:55,766 INFO [Listener at localhost/37985] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 14:15:55,767 INFO [Listener at localhost/37985] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 14:15:55,767 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 14:15:55,767 INFO [Listener at localhost/37985] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 14:15:55,767 INFO [Listener at localhost/37985] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 14:15:55,767 INFO [Listener at localhost/37985] http.HttpServer(1146): Jetty bound to port 33605 2023-07-16 14:15:55,768 INFO [Listener at localhost/37985] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 14:15:55,769 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:55,769 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@478f422{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,AVAILABLE} 2023-07-16 14:15:55,769 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:55,769 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,AVAILABLE} 2023-07-16 14:15:55,774 INFO [Listener at localhost/37985] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 14:15:55,775 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 14:15:55,775 INFO [Listener at localhost/37985] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 14:15:55,775 INFO [Listener at localhost/37985] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 14:15:55,776 INFO [Listener at localhost/37985] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 14:15:55,776 INFO [Listener at localhost/37985] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@a9db2d1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver/,AVAILABLE}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:55,778 INFO [Listener at localhost/37985] server.AbstractConnector(333): Started ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:33605} 2023-07-16 14:15:55,778 INFO [Listener at localhost/37985] server.Server(415): Started @43367ms 2023-07-16 14:15:55,780 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(951): ClusterId : 06c18206-c009-4e02-8c4f-4260c4018c5a 2023-07-16 14:15:55,781 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 14:15:55,783 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 14:15:55,783 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 14:15:55,785 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 14:15:55,788 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ReadOnlyZKClient(139): Connect 0x69d39f45 to 127.0.0.1:50636 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 14:15:55,794 DEBUG [RS:3;jenkins-hbase4:42869] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@67146c0f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 14:15:55,794 DEBUG [RS:3;jenkins-hbase4:42869] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@632bf435, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:55,802 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42869 2023-07-16 14:15:55,802 INFO [RS:3;jenkins-hbase4:42869] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 14:15:55,802 INFO [RS:3;jenkins-hbase4:42869] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 14:15:55,802 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 14:15:55,803 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,42483,1689516954385 with isa=jenkins-hbase4.apache.org/172.31.14.131:42869, startcode=1689516955752 2023-07-16 14:15:55,803 DEBUG [RS:3;jenkins-hbase4:42869] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 14:15:55,805 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49527, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 14:15:55,805 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42483] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,805 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 14:15:55,806 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0 2023-07-16 14:15:55,806 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33443 2023-07-16 14:15:55,806 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42119 2023-07-16 14:15:55,812 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:55,812 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:55,812 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:55,812 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:55,812 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:55,812 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,812 WARN [RS:3;jenkins-hbase4:42869] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 14:15:55,812 INFO [RS:3;jenkins-hbase4:42869] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 14:15:55,812 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42869,1689516955752] 2023-07-16 14:15:55,812 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,812 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 14:15:55,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,813 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,816 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 14:15:55,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,817 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:55,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:55,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,818 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:55,818 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,822 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,823 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:55,823 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:55,823 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ZKUtil(162): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:55,824 DEBUG [RS:3;jenkins-hbase4:42869] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 14:15:55,824 INFO [RS:3;jenkins-hbase4:42869] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 14:15:55,826 INFO [RS:3;jenkins-hbase4:42869] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 14:15:55,827 INFO [RS:3;jenkins-hbase4:42869] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 14:15:55,827 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,827 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 14:15:55,829 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,830 DEBUG [RS:3;jenkins-hbase4:42869] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 14:15:55,834 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,835 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,835 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 14:15:55,848 INFO [RS:3;jenkins-hbase4:42869] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 14:15:55,848 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42869,1689516955752-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 14:15:55,862 INFO [RS:3;jenkins-hbase4:42869] regionserver.Replication(203): jenkins-hbase4.apache.org,42869,1689516955752 started 2023-07-16 14:15:55,862 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42869,1689516955752, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42869, sessionid=0x1016e7d55fe000b 2023-07-16 14:15:55,863 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 14:15:55,863 DEBUG [RS:3;jenkins-hbase4:42869] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,863 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42869,1689516955752' 2023-07-16 14:15:55,863 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 14:15:55,863 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 14:15:55,864 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 14:15:55,864 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 14:15:55,864 DEBUG [RS:3;jenkins-hbase4:42869] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:55,864 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42869,1689516955752' 2023-07-16 14:15:55,864 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 14:15:55,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:55,865 DEBUG [RS:3;jenkins-hbase4:42869] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 14:15:55,865 DEBUG [RS:3;jenkins-hbase4:42869] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 14:15:55,865 INFO [RS:3;jenkins-hbase4:42869] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 14:15:55,865 INFO [RS:3;jenkins-hbase4:42869] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 14:15:55,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:55,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:55,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:55,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:55,882 DEBUG [hconnection-0x1ecadc90-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:55,884 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48660, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:55,888 DEBUG [hconnection-0x1ecadc90-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 14:15:55,890 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57608, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 14:15:55,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:55,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:55,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:55,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:55,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35312 deadline: 1689518155895, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:55,896 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:55,897 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:55,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:55,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:55,898 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:55,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:55,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:55,953 INFO [Listener at localhost/37985] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=567 (was 515) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:35057Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data2/current/BP-14668270-172.31.14.131-1689516953602 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2278 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:33211 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55919@0x52ea318a-SendThread(127.0.0.1:55919) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x22b8bb45 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2077147454_17 at /127.0.0.1:33620 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-34 
java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data3/current/BP-14668270-172.31.14.131-1689516953602 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2250 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:40166 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:33443 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@7e68d44a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:42869-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1334939611@qtp-1324500903-0 
java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2077147454_17 at /127.0.0.1:40092 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1ecadc90-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@2fe94bae java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33357-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1825508135-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 36335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x69d39f45 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35057 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a859ac9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33357-SendThread(127.0.0.1:55919) java.lang.Thread.sleep(Native 
Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:40146 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33443 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1825508135-2221 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33211Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2068356457_17 at /127.0.0.1:33642 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2077147454_17 at /127.0.0.1:40110 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1825508135-2220 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55919@0x52ea318a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0-prefix:jenkins-hbase4.apache.org,36389,1689516954455 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x69d39f45-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:43571 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x22b8bb45-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2087659022-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x46a2470b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data4/current/BP-14668270-172.31.14.131-1689516953602 
java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2246 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2252 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:33443 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2217 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2216-acceptor-0@3d4078ed-ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:42119} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:55919@0x52ea318a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@212b1f9f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:42483 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) 
org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-49346ed9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp722503952-2281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:43571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp211177675-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516954742 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x445c719d-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2087659022-2586-acceptor-0@4685cdb9-ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:33605} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/37985.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp1825508135-2218 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008, type=LAST_IN_PIPELINE 
java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-52b39fb9-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2219 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33443 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:42869Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp996131713-2321-acceptor-0@1545801a-ServerConnector@35983a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:39893} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:0;jenkins-hbase4:36389-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33443 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x63898e27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 45629 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data1/current/BP-14668270-172.31.14.131-1689516953602 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:43571 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp211177675-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2585 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 361516287@qtp-840750738-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46309 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42869 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x6d664fe7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@3cc0e5fa java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:43571 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x7d87758a-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2087659022-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-549-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35057 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
IPC Client (1402356983) connection to localhost/127.0.0.1:43571 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33443 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0-prefix:jenkins-hbase4.apache.org,36389,1689516954455.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp996131713-2322 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 1 on default port 45629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x22b8bb45-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 36335 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp211177675-2307-acceptor-0@147e08f4-ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:36451} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@f725f86 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x69d39f45-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp2090151062-2248 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2282 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp996131713-2324 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33443 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:33443 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp722503952-2277-acceptor-0@6f854efb-ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:44877} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2276 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0-prefix:jenkins-hbase4.apache.org,35057,1689516954537 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 45629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp211177675-2306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x63898e27-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x445c719d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-554-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-14668270-172.31.14.131-1689516953602 heartbeating to 
localhost/127.0.0.1:33443 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:3;jenkins-hbase4:42869 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: globalEventExecutor-1-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) io.netty.util.concurrent.GlobalEventExecutor.takeTask(GlobalEventExecutor.java:95) io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:239) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:33664 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-69eb0e1f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@4c0bc3e1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp211177675-2308 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp211177675-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data5/current/BP-14668270-172.31.14.131-1689516953602 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-5daf6e09-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp211177675-2312 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-544-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp722503952-2279 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@70c264 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x6d664fe7-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData-prefix:jenkins-hbase4.apache.org,42483,1689516954385 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@236e75b5 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1994651802@qtp-394908169-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38049 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: hconnection-0x9520519-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2587 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x3720103e-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 386992516@qtp-296849451-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1114202808_17 at /127.0.0.1:45302 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(685018696) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp2090151062-2251 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516954742 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp722503952-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 897410008@qtp-840750738-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@11f21f93 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-443e44f8-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2068356457_17 at /127.0.0.1:40136 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) 
java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 36335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2090151062-2249 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:45318 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1114202808_17 at /127.0.0.1:33658 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x63898e27-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 37985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x3720103e-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1114202808_17 at /127.0.0.1:40158 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1114202808_17 at /127.0.0.1:33596 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 37985 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@408306a7[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp996131713-2323 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x46a2470b-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x9520519-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35057-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x9520519-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp996131713-2318 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp996131713-2317 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 37985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:33650 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:33443 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: qtp722503952-2280 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:50636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x46a2470b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp211177675-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1402356983) connection to localhost/127.0.0.1:43571 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) 
Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:43571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 45629 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40717,1689516949514 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp996131713-2319 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 36335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-563-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0-prefix:jenkins-hbase4.apache.org,33211,1689516954495 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x3720103e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@179ac26f[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x445c719d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/7563763.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1825508135-2215 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-2077147454_17 at /127.0.0.1:45246 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 733763639@qtp-296849451-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:36389 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 33443 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/37985-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 3 on default port 36335 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:36389Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@5bef253f java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 37985 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x1ecadc90-shared-pool-0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:33211-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33443 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:45294 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@4effafd7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42483 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@84f8388 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@727af871 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2090151062-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:43571 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:50636): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2068356457_17 at /127.0.0.1:45280 [Receiving block BP-14668270-172.31.14.131-1689516953602:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/37985.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35057 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50636@0x6d664fe7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/37985-SendThread(127.0.0.1:50636) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 1797914875@qtp-1324500903-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46083 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) 
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2090151062-2247-acceptor-0@169ed059-ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:36257} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1492822819@qtp-394908169-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp996131713-2320 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1715946527.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42869 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1813552096_17 at /127.0.0.1:45314 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2087659022-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,42483,1689516954385 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-14668270-172.31.14.131-1689516953602:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data6/current/BP-14668270-172.31.14.131-1689516953602 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=850 (was 797) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 468), ProcessCount=174 (was 175), AvailableMemoryMB=2163 (was 2311) 2023-07-16 14:15:55,957 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-16 14:15:55,968 INFO [RS:3;jenkins-hbase4:42869] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42869%2C1689516955752, suffix=, logDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,42869,1689516955752, archiveDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs, maxLogs=32 2023-07-16 14:15:55,981 INFO [Listener at localhost/37985] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=567, OpenFileDescriptor=850, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=174, AvailableMemoryMB=2163 2023-07-16 14:15:55,981 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=567 is superior to 500 2023-07-16 14:15:55,981 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-16 14:15:55,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:55,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:55,992 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK] 2023-07-16 14:15:55,994 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK] 2023-07-16 14:15:55,996 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK] 2023-07-16 14:15:55,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:55,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:55,997 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:55,999 INFO [RS:3;jenkins-hbase4:42869] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/WALs/jenkins-hbase4.apache.org,42869,1689516955752/jenkins-hbase4.apache.org%2C42869%2C1689516955752.1689516955968 2023-07-16 14:15:55,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:56,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:56,000 DEBUG [RS:3;jenkins-hbase4:42869] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36819,DS-673f9601-f8c8-4f58-af9b-5190a1e7bac2,DISK], DatanodeInfoWithStorage[127.0.0.1:43309,DS-f8337560-d7d2-4d5e-8063-b5b45fc7594a,DISK], DatanodeInfoWithStorage[127.0.0.1:40721,DS-89e45247-2462-4760-bb6a-8c1af26f5d0f,DISK]] 2023-07-16 14:15:56,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:56,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:56,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:56,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:56,010 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:56,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:56,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:56,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:56,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:56,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:56,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:56,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:56,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:56,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:56,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35312 deadline: 1689518156021, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:56,022 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:56,023 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:56,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:56,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:56,024 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:56,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:56,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:56,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:56,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 14:15:56,029 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:56,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-16 14:15:56,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 14:15:56,031 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:56,031 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:56,032 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:56,034 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 14:15:56,035 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 
14:15:56,036 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 empty. 2023-07-16 14:15:56,036 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,036 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 14:15:56,053 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-16 14:15:56,053 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 14:15:56,055 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1e1b78999ec7fa8c4e6f9487af390c29, NAME => 't1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp 2023-07-16 14:15:56,083 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:56,083 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 1e1b78999ec7fa8c4e6f9487af390c29, disabling compactions & flushes 2023-07-16 14:15:56,084 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. after waiting 0 ms 2023-07-16 14:15:56,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,084 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 
2023-07-16 14:15:56,084 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 1e1b78999ec7fa8c4e6f9487af390c29: 2023-07-16 14:15:56,086 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 14:15:56,088 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516956087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516956087"}]},"ts":"1689516956087"} 2023-07-16 14:15:56,089 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 14:15:56,092 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 14:15:56,092 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516956092"}]},"ts":"1689516956092"} 2023-07-16 14:15:56,093 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 14:15:56,098 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 14:15:56,099 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, ASSIGN}] 2023-07-16 14:15:56,100 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, ASSIGN 2023-07-16 14:15:56,114 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36389,1689516954455; forceNewPlan=false, retain=false 2023-07-16 14:15:56,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 14:15:56,264 INFO [jenkins-hbase4:42483] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 14:15:56,266 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1e1b78999ec7fa8c4e6f9487af390c29, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:56,266 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516956266"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516956266"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516956266"}]},"ts":"1689516956266"} 2023-07-16 14:15:56,268 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 1e1b78999ec7fa8c4e6f9487af390c29, server=jenkins-hbase4.apache.org,36389,1689516954455}] 2023-07-16 14:15:56,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 14:15:56,423 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1e1b78999ec7fa8c4e6f9487af390c29, NAME => 't1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.', STARTKEY => '', ENDKEY => ''} 2023-07-16 14:15:56,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 14:15:56,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,425 INFO [StoreOpener-1e1b78999ec7fa8c4e6f9487af390c29-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,426 DEBUG [StoreOpener-1e1b78999ec7fa8c4e6f9487af390c29-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/cf1 2023-07-16 14:15:56,426 DEBUG [StoreOpener-1e1b78999ec7fa8c4e6f9487af390c29-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/cf1 2023-07-16 14:15:56,426 INFO [StoreOpener-1e1b78999ec7fa8c4e6f9487af390c29-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1e1b78999ec7fa8c4e6f9487af390c29 columnFamilyName cf1 2023-07-16 14:15:56,427 INFO [StoreOpener-1e1b78999ec7fa8c4e6f9487af390c29-1] regionserver.HStore(310): Store=1e1b78999ec7fa8c4e6f9487af390c29/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 14:15:56,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,430 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 14:15:56,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1e1b78999ec7fa8c4e6f9487af390c29; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10040691200, jitterRate=-0.06488776206970215}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 14:15:56,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1e1b78999ec7fa8c4e6f9487af390c29: 2023-07-16 14:15:56,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29., pid=14, masterSystemTime=1689516956419 2023-07-16 14:15:56,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 
2023-07-16 14:15:56,435 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1e1b78999ec7fa8c4e6f9487af390c29, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:56,435 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516956435"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689516956435"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689516956435"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689516956435"}]},"ts":"1689516956435"} 2023-07-16 14:15:56,437 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-16 14:15:56,437 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 1e1b78999ec7fa8c4e6f9487af390c29, server=jenkins-hbase4.apache.org,36389,1689516954455 in 168 msec 2023-07-16 14:15:56,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 14:15:56,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, ASSIGN in 338 msec 2023-07-16 14:15:56,439 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 14:15:56,440 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516956439"}]},"ts":"1689516956439"} 2023-07-16 14:15:56,441 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-16 14:15:56,442 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 14:15:56,444 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 415 msec 2023-07-16 14:15:56,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 14:15:56,642 INFO [Listener at localhost/37985] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-16 14:15:56,642 DEBUG [Listener at localhost/37985] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-16 14:15:56,642 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:56,644 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-16 14:15:56,644 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:56,645 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-16 14:15:56,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 14:15:56,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 14:15:56,649 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 14:15:56,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-16 14:15:56,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 353 connection: 172.31.14.131:35312 deadline: 1689517016646, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-16 14:15:56,651 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:56,653 INFO [PEWorker-5] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-16 14:15:56,752 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:56,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:56,753 INFO [Listener at localhost/37985] client.HBaseAdmin$15(890): Started disable of t1 2023-07-16 14:15:56,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-16 14:15:56,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-16 14:15:56,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 14:15:56,757 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516956757"}]},"ts":"1689516956757"} 2023-07-16 14:15:56,758 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-16 14:15:56,760 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-16 14:15:56,760 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, UNASSIGN}] 2023-07-16 14:15:56,761 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, UNASSIGN 2023-07-16 14:15:56,761 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1e1b78999ec7fa8c4e6f9487af390c29, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:56,761 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516956761"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689516956761"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689516956761"}]},"ts":"1689516956761"} 2023-07-16 14:15:56,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 1e1b78999ec7fa8c4e6f9487af390c29, server=jenkins-hbase4.apache.org,36389,1689516954455}] 2023-07-16 14:15:56,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 14:15:56,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1e1b78999ec7fa8c4e6f9487af390c29, disabling compactions & flushes 2023-07-16 14:15:56,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. after waiting 0 ms 2023-07-16 14:15:56,915 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 
2023-07-16 14:15:56,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 14:15:56,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29. 2023-07-16 14:15:56,920 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1e1b78999ec7fa8c4e6f9487af390c29: 2023-07-16 14:15:56,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:56,922 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1e1b78999ec7fa8c4e6f9487af390c29, regionState=CLOSED 2023-07-16 14:15:56,922 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689516956922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689516956922"}]},"ts":"1689516956922"} 2023-07-16 14:15:56,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 14:15:56,925 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 1e1b78999ec7fa8c4e6f9487af390c29, server=jenkins-hbase4.apache.org,36389,1689516954455 in 162 msec 2023-07-16 14:15:56,927 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-16 14:15:56,927 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=1e1b78999ec7fa8c4e6f9487af390c29, UNASSIGN in 165 msec 2023-07-16 14:15:56,928 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689516956927"}]},"ts":"1689516956927"} 2023-07-16 14:15:56,929 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-16 14:15:56,930 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-16 14:15:56,934 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 177 msec 2023-07-16 14:15:57,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 14:15:57,060 INFO [Listener at localhost/37985] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-16 14:15:57,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-16 14:15:57,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-16 14:15:57,062 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 14:15:57,062 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-16 14:15:57,063 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-16 14:15:57,064 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,067 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:57,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 14:15:57,068 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/cf1, FileablePath, hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/recovered.edits] 2023-07-16 14:15:57,073 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/recovered.edits/4.seqid to hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/archive/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29/recovered.edits/4.seqid 2023-07-16 14:15:57,074 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/.tmp/data/default/t1/1e1b78999ec7fa8c4e6f9487af390c29 2023-07-16 14:15:57,074 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 14:15:57,076 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-16 14:15:57,078 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-16 14:15:57,079 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-16 14:15:57,080 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-16 14:15:57,080 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-16 14:15:57,080 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689516957080"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:57,082 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 14:15:57,082 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1e1b78999ec7fa8c4e6f9487af390c29, NAME => 't1,,1689516956026.1e1b78999ec7fa8c4e6f9487af390c29.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 14:15:57,082 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-16 14:15:57,082 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689516957082"}]},"ts":"9223372036854775807"} 2023-07-16 14:15:57,084 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-16 14:15:57,086 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 14:15:57,087 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 26 msec 2023-07-16 14:15:57,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 14:15:57,168 INFO [Listener at localhost/37985] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-16 14:15:57,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,187 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:35312 deadline: 1689518157197, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,198 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:57,201 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,202 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,222 INFO [Listener at localhost/37985] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=578 (was 567) - Thread LEAK? -, OpenFileDescriptor=854 (was 850) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=174 (was 174), AvailableMemoryMB=2190 (was 2163) - AvailableMemoryMB LEAK? 
- 2023-07-16 14:15:57,222 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-16 14:15:57,241 INFO [Listener at localhost/37985] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=578, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=174, AvailableMemoryMB=2190 2023-07-16 14:15:57,241 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-16 14:15:57,241 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-16 14:15:57,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:57,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,254 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,258 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157264, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,265 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:57,267 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,268 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 14:15:57,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:57,271 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-16 14:15:57,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 14:15:57,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 14:15:57,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,283 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:57,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,290 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157301, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,301 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:57,303 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,304 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,304 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,327 INFO [Listener at localhost/37985] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=580 (was 578) - Thread LEAK? 
-, OpenFileDescriptor=854 (was 854), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=174 (was 174), AvailableMemoryMB=2188 (was 2190) 2023-07-16 14:15:57,327 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=580 is superior to 500 2023-07-16 14:15:57,349 INFO [Listener at localhost/37985] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=580, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=174, AvailableMemoryMB=2188 2023-07-16 14:15:57,349 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=580 is superior to 500 2023-07-16 14:15:57,350 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-16 14:15:57,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:57,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,367 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,374 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157383, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,383 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:57,385 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,386 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,391 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:57,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,403 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157413, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,414 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:57,416 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,417 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,441 INFO [Listener at localhost/37985] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=580 (was 580), OpenFileDescriptor=853 (was 854), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=174 (was 174), AvailableMemoryMB=2189 (was 2188) - AvailableMemoryMB LEAK? 
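[editor's note] The repeated ConstraintException above ("Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist") is raised while TestRSGroupsBase.tearDownAfterMethod() recreates a "master" RSGroup and tries to move the master's address into it, even though that address is not registered as a live region server. A minimal sketch of that call path, assuming only the branch-2.4 RSGroupAdminClient API that appears in the stack traces (addRSGroup/moveServers); the host:port is a placeholder for whatever the master registered under, not a real endpoint:

    import java.util.Collections;
    import java.util.Set;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterIntoGroupSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Recreate the "master" group, as the test teardown does.
          rsGroupAdmin.addRSGroup("master");
          // Attempt to move the master's address into it. When that address is not a
          // live region server, RSGroupAdminServer.moveServers() rejects it with a
          // ConstraintException, which the test merely logs as "Got this on setup, FYI".
          Set<Address> servers =
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:42483"));
          rsGroupAdmin.moveServers(servers, "master");
        }
      }
    }

That is why the WARN is tolerated here: the teardown treats the failed move as best-effort cleanup rather than a test failure.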
- 2023-07-16 14:15:57,442 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=580 is superior to 500 2023-07-16 14:15:57,460 INFO [Listener at localhost/37985] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=578, OpenFileDescriptor=851, MaxFileDescriptor=60000, SystemLoadAverage=426, ProcessCount=174, AvailableMemoryMB=2189 2023-07-16 14:15:57,460 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-16 14:15:57,460 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-16 14:15:57,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 14:15:57,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,476 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,478 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,480 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,483 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157485, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,485 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 14:15:57,487 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,488 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,489 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-16 14:15:57,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-16 14:15:57,492 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 14:15:57,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 14:15:57,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,500 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 14:15:57,501 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,506 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 14:15:57,511 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:57,515 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-16 14:15:57,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 14:15:57,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 14:15:57,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:35312 deadline: 1689518157607, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-16 14:15:57,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 14:15:57,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:57,629 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 14:15:57,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-16 14:15:57,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 14:15:57,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-16 14:15:57,731 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 14:15:57,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 14:15:57,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 14:15:57,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-16 14:15:57,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,747 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,749 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,749 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 14:15:57,751 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,752 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 14:15:57,752 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 14:15:57,752 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,754 INFO [PEWorker-5] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 14:15:57,755 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-16 14:15:57,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 14:15:57,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 14:15:57,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 14:15:57,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 14:15:57,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:35312 deadline: 1689517017862, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-16 14:15:57,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
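[editor's note] The testNamespaceConstraint sequence above exercises the coupling between namespaces and RSGroups through the hbase.rsgroup.name namespace property: a group that a namespace references cannot be removed ("RSGroup Group_foo is referenced by namespace: Group_foo"), and a namespace cannot be created against a group that does not exist ("Region server group foo does not exist", rejected in preCreateNamespace). A rough equivalent of the logged sequence, assuming the Admin and RSGroupAdminClient calls named in the log; the group and namespace names are taken from the test itself:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class NamespaceConstraintSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // 1. Create a group, then a namespace pinned to it via hbase.rsgroup.name.
          rsGroupAdmin.addRSGroup("Group_foo");
          admin.createNamespace(NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "Group_foo").build());

          // 2. Removing the group while the namespace still references it is rejected:
          //    ConstraintException "RSGroup Group_foo is referenced by namespace: Group_foo".
          // rsGroupAdmin.removeRSGroup("Group_foo");

          // 3. Once the namespace is gone, the group can be removed.
          admin.deleteNamespace("Group_foo");
          rsGroupAdmin.removeRSGroup("Group_foo");

          // 4. Creating a namespace that points at a nonexistent group fails in
          //    preCreateNamespace: "Region server group foo does not exist."
          // admin.createNamespace(NamespaceDescriptor.create("Group_bar")
          //     .addConfiguration("hbase.rsgroup.name", "foo").build());
        }
      }
    }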
2023-07-16 14:15:57,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-16 14:15:57,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 14:15:57,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 14:15:57,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 14:15:57,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 14:15:57,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 14:15:57,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 14:15:57,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 14:15:57,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 14:15:57,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 14:15:57,883 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 14:15:57,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 14:15:57,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 14:15:57,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 14:15:57,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 14:15:57,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 14:15:57,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42483] to rsgroup master 2023-07-16 14:15:57,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 14:15:57,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:35312 deadline: 1689518157892, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 2023-07-16 14:15:57,893 WARN [Listener at localhost/37985] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:42483 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 14:15:57,895 INFO [Listener at localhost/37985] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 14:15:57,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 14:15:57,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 14:15:57,897 INFO [Listener at localhost/37985] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33211, jenkins-hbase4.apache.org:35057, jenkins-hbase4.apache.org:36389, jenkins-hbase4.apache.org:42869], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 14:15:57,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 14:15:57,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42483] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 14:15:57,917 INFO [Listener at localhost/37985] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=578 (was 578), OpenFileDescriptor=851 (was 851), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=426 (was 426), ProcessCount=174 (was 174), AvailableMemoryMB=2182 (was 2189) 2023-07-16 14:15:57,917 WARN [Listener at localhost/37985] hbase.ResourceChecker(130): Thread=578 is superior to 500 2023-07-16 14:15:57,917 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 14:15:57,917 INFO [Listener at localhost/37985] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 14:15:57,917 DEBUG [Listener at localhost/37985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3720103e to 127.0.0.1:50636 2023-07-16 14:15:57,917 DEBUG [Listener at localhost/37985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,917 DEBUG [Listener at localhost/37985] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 
14:15:57,917 DEBUG [Listener at localhost/37985] util.JVMClusterUtil(257): Found active master hash=2025855394, stopped=false 2023-07-16 14:15:57,917 DEBUG [Listener at localhost/37985] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 14:15:57,917 DEBUG [Listener at localhost/37985] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 14:15:57,918 INFO [Listener at localhost/37985] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:57,919 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:57,919 INFO [Listener at localhost/37985] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 14:15:57,919 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:57,920 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 14:15:57,919 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:57,919 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:57,921 DEBUG [Listener at localhost/37985] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x22b8bb45 to 127.0.0.1:50636 2023-07-16 14:15:57,921 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:57,922 DEBUG [Listener at localhost/37985] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,922 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:57,922 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36389,1689516954455' ***** 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33211,1689516954495' ***** 2023-07-16 14:15:57,922 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36389-0x1016e7d55fe0001, 
quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:57,922 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35057,1689516954537' ***** 2023-07-16 14:15:57,922 INFO [Listener at localhost/37985] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:57,924 INFO [Listener at localhost/37985] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42869,1689516955752' ***** 2023-07-16 14:15:57,924 INFO [Listener at localhost/37985] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 14:15:57,924 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:57,924 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:57,924 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:57,924 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 14:15:57,928 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 14:15:57,928 INFO [RS:0;jenkins-hbase4:36389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2ab355a6{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:57,928 INFO [RS:0;jenkins-hbase4:36389] server.AbstractConnector(383): Stopped ServerConnector@30ce9388{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:57,928 INFO [RS:0;jenkins-hbase4:36389] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:57,931 INFO [RS:0;jenkins-hbase4:36389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@39bfefc6{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:57,932 INFO [RS:1;jenkins-hbase4:33211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@24fd7fc7{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:57,932 INFO [RS:2;jenkins-hbase4:35057] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@164f60c8{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:57,932 INFO [RS:0;jenkins-hbase4:36389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@666a8c86{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:57,932 INFO [RS:3;jenkins-hbase4:42869] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@a9db2d1{regionserver,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/regionserver} 2023-07-16 14:15:57,933 INFO [RS:1;jenkins-hbase4:33211] server.AbstractConnector(383): Stopped ServerConnector@66049cc4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:57,933 INFO [RS:2;jenkins-hbase4:35057] server.AbstractConnector(383): Stopped ServerConnector@b2fe258{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:57,933 INFO [RS:1;jenkins-hbase4:33211] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:57,933 INFO [RS:2;jenkins-hbase4:35057] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:57,933 INFO [RS:3;jenkins-hbase4:42869] server.AbstractConnector(383): Stopped ServerConnector@24e74273{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:57,934 INFO [RS:3;jenkins-hbase4:42869] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:57,934 INFO [RS:2;jenkins-hbase4:35057] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@15be34c0{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:57,934 INFO [RS:1;jenkins-hbase4:33211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4ceb7e75{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:57,937 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:57,937 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,941 INFO [RS:1;jenkins-hbase4:33211] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@730a2ea8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:57,935 INFO [RS:3;jenkins-hbase4:42869] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5da1de58{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:57,935 INFO [RS:0;jenkins-hbase4:36389] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:57,940 INFO [RS:2;jenkins-hbase4:35057] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@12ca8bea{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:57,942 INFO [RS:0;jenkins-hbase4:36389] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:57,942 INFO [RS:0;jenkins-hbase4:36389] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 14:15:57,942 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(3305): Received CLOSE for 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:57,942 INFO [RS:1;jenkins-hbase4:33211] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:57,942 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:57,943 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:57,943 DEBUG [RS:0;jenkins-hbase4:36389] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6d664fe7 to 127.0.0.1:50636 2023-07-16 14:15:57,943 DEBUG [RS:0;jenkins-hbase4:36389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,943 INFO [RS:0;jenkins-hbase4:36389] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:57,943 INFO [RS:0;jenkins-hbase4:36389] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:57,943 INFO [RS:0;jenkins-hbase4:36389] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:57,943 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 14:15:57,942 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:57,942 INFO [RS:3;jenkins-hbase4:42869] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@478f422{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:57,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2766fae315acce3173e6c52fdc18b07b, disabling compactions & flushes 2023-07-16 14:15:57,943 INFO [RS:1;jenkins-hbase4:33211] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:57,943 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:57,943 INFO [RS:1;jenkins-hbase4:33211] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:57,943 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:57,944 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(3305): Received CLOSE for bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. after waiting 0 ms 2023-07-16 14:15:57,944 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 
2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 14:15:57,944 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:57,944 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 14:15:57,944 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1478): Online Regions={2766fae315acce3173e6c52fdc18b07b=hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b., 1588230740=hbase:meta,,1.1588230740} 2023-07-16 14:15:57,944 INFO [RS:2;jenkins-hbase4:35057] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bcbb66dfa84ee142cb7fccaeec781eac, disabling compactions & flushes 2023-07-16 14:15:57,944 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 14:15:57,944 DEBUG [RS:1;jenkins-hbase4:33211] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63898e27 to 127.0.0.1:50636 2023-07-16 14:15:57,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 2766fae315acce3173e6c52fdc18b07b 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-16 14:15:57,945 INFO [RS:3;jenkins-hbase4:42869] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 14:15:57,945 INFO [RS:3;jenkins-hbase4:42869] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:57,945 INFO [RS:3;jenkins-hbase4:42869] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:57,945 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:57,945 DEBUG [RS:3;jenkins-hbase4:42869] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x69d39f45 to 127.0.0.1:50636 2023-07-16 14:15:57,945 DEBUG [RS:3;jenkins-hbase4:42869] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,945 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42869,1689516955752; all regions closed. 2023-07-16 14:15:57,944 DEBUG [RS:1;jenkins-hbase4:33211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,945 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 14:15:57,945 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1478): Online Regions={bcbb66dfa84ee142cb7fccaeec781eac=hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac.} 2023-07-16 14:15:57,945 DEBUG [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1504): Waiting on bcbb66dfa84ee142cb7fccaeec781eac 2023-07-16 14:15:57,944 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 
2023-07-16 14:15:57,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:57,945 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. after waiting 0 ms 2023-07-16 14:15:57,946 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:57,946 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing bcbb66dfa84ee142cb7fccaeec781eac 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-16 14:15:57,944 INFO [RS:2;jenkins-hbase4:35057] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 14:15:57,944 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 14:15:57,944 DEBUG [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1504): Waiting on 1588230740, 2766fae315acce3173e6c52fdc18b07b 2023-07-16 14:15:57,946 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 14:15:57,946 INFO [RS:2;jenkins-hbase4:35057] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 14:15:57,946 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:57,946 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-16 14:15:57,946 DEBUG [RS:2;jenkins-hbase4:35057] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x445c719d to 127.0.0.1:50636 2023-07-16 14:15:57,947 DEBUG [RS:2;jenkins-hbase4:35057] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,947 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35057,1689516954537; all regions closed. 
2023-07-16 14:15:57,958 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,964 DEBUG [RS:3;jenkins-hbase4:42869] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs 2023-07-16 14:15:57,964 INFO [RS:3;jenkins-hbase4:42869] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42869%2C1689516955752:(num 1689516955968) 2023-07-16 14:15:57,964 DEBUG [RS:3;jenkins-hbase4:42869] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,964 INFO [RS:3;jenkins-hbase4:42869] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,965 DEBUG [RS:2;jenkins-hbase4:35057] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs 2023-07-16 14:15:57,965 INFO [RS:2;jenkins-hbase4:35057] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35057%2C1689516954537:(num 1689516954988) 2023-07-16 14:15:57,965 DEBUG [RS:2;jenkins-hbase4:35057] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:57,965 INFO [RS:2;jenkins-hbase4:35057] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,968 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,968 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:57,977 INFO [RS:3;jenkins-hbase4:42869] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:57,978 INFO [RS:2;jenkins-hbase4:35057] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:57,978 INFO [RS:3;jenkins-hbase4:42869] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:57,978 INFO [RS:3;jenkins-hbase4:42869] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:57,978 INFO [RS:3;jenkins-hbase4:42869] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:57,978 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:57,994 INFO [RS:3;jenkins-hbase4:42869] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42869 2023-07-16 14:15:57,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/.tmp/info/da2a4af44c5747ee92b5880a059e8741 2023-07-16 14:15:58,006 INFO [RS:2;jenkins-hbase4:35057] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:58,006 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:58,006 INFO [RS:2;jenkins-hbase4:35057] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 
2023-07-16 14:15:58,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/.tmp/m/c3f328e42c5544fa88ca15d533ed84db 2023-07-16 14:15:58,023 INFO [RS:2;jenkins-hbase4:35057] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:58,024 INFO [RS:2;jenkins-hbase4:35057] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35057 2023-07-16 14:15:58,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for da2a4af44c5747ee92b5880a059e8741 2023-07-16 14:15:58,029 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/info/d74e177245f9449cba267a30c5286024 2023-07-16 14:15:58,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/.tmp/info/da2a4af44c5747ee92b5880a059e8741 as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/info/da2a4af44c5747ee92b5880a059e8741 2023-07-16 14:15:58,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c3f328e42c5544fa88ca15d533ed84db 2023-07-16 14:15:58,035 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/.tmp/m/c3f328e42c5544fa88ca15d533ed84db as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/m/c3f328e42c5544fa88ca15d533ed84db 2023-07-16 14:15:58,036 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d74e177245f9449cba267a30c5286024 2023-07-16 14:15:58,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for da2a4af44c5747ee92b5880a059e8741 2023-07-16 14:15:58,039 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/info/da2a4af44c5747ee92b5880a059e8741, entries=3, sequenceid=9, filesize=5.0 K 2023-07-16 14:15:58,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 2766fae315acce3173e6c52fdc18b07b in 96ms, sequenceid=9, compaction requested=false 2023-07-16 14:15:58,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 
c3f328e42c5544fa88ca15d533ed84db 2023-07-16 14:15:58,045 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/m/c3f328e42c5544fa88ca15d533ed84db, entries=12, sequenceid=29, filesize=5.4 K 2023-07-16 14:15:58,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for bcbb66dfa84ee142cb7fccaeec781eac in 101ms, sequenceid=29, compaction requested=false 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42869,1689516955752 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,053 DEBUG 
[Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:58,053 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35057,1689516954537 2023-07-16 14:15:58,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/namespace/2766fae315acce3173e6c52fdc18b07b/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 14:15:58,058 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,059 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42869,1689516955752] 2023-07-16 14:15:58,059 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42869,1689516955752; numProcessing=1 2023-07-16 14:15:58,060 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42869,1689516955752 already deleted, retry=false 2023-07-16 14:15:58,060 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42869,1689516955752 expired; onlineServers=3 2023-07-16 14:15:58,060 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35057,1689516954537] 2023-07-16 14:15:58,060 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35057,1689516954537; numProcessing=2 2023-07-16 14:15:58,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 2023-07-16 14:15:58,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2766fae315acce3173e6c52fdc18b07b: 2023-07-16 14:15:58,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689516955217.2766fae315acce3173e6c52fdc18b07b. 
2023-07-16 14:15:58,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/rsgroup/bcbb66dfa84ee142cb7fccaeec781eac/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-16 14:15:58,062 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35057,1689516954537 already deleted, retry=false 2023-07-16 14:15:58,062 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35057,1689516954537 expired; onlineServers=2 2023-07-16 14:15:58,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:58,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:58,065 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bcbb66dfa84ee142cb7fccaeec781eac: 2023-07-16 14:15:58,065 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689516955358.bcbb66dfa84ee142cb7fccaeec781eac. 2023-07-16 14:15:58,065 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/rep_barrier/4636e9a1894a41e1b50e05fe93018827 2023-07-16 14:15:58,070 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4636e9a1894a41e1b50e05fe93018827 2023-07-16 14:15:58,080 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/table/46be56d471634dd89c9bfebdc82a50eb 2023-07-16 14:15:58,085 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 46be56d471634dd89c9bfebdc82a50eb 2023-07-16 14:15:58,085 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/info/d74e177245f9449cba267a30c5286024 as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/info/d74e177245f9449cba267a30c5286024 2023-07-16 14:15:58,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d74e177245f9449cba267a30c5286024 2023-07-16 14:15:58,090 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/info/d74e177245f9449cba267a30c5286024, entries=22, sequenceid=26, filesize=7.3 K 2023-07-16 14:15:58,091 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/rep_barrier/4636e9a1894a41e1b50e05fe93018827 as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/rep_barrier/4636e9a1894a41e1b50e05fe93018827 2023-07-16 14:15:58,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 4636e9a1894a41e1b50e05fe93018827 2023-07-16 14:15:58,096 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/rep_barrier/4636e9a1894a41e1b50e05fe93018827, entries=1, sequenceid=26, filesize=4.9 K 2023-07-16 14:15:58,097 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/.tmp/table/46be56d471634dd89c9bfebdc82a50eb as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/table/46be56d471634dd89c9bfebdc82a50eb 2023-07-16 14:15:58,102 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 46be56d471634dd89c9bfebdc82a50eb 2023-07-16 14:15:58,103 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/table/46be56d471634dd89c9bfebdc82a50eb, entries=6, sequenceid=26, filesize=5.1 K 2023-07-16 14:15:58,103 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 157ms, sequenceid=26, compaction requested=false 2023-07-16 14:15:58,111 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-16 14:15:58,112 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 14:15:58,113 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:58,113 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 14:15:58,113 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 14:15:58,145 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33211,1689516954495; all regions closed. 2023-07-16 14:15:58,146 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36389,1689516954455; all regions closed. 
2023-07-16 14:15:58,154 DEBUG [RS:1;jenkins-hbase4:33211] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs 2023-07-16 14:15:58,154 INFO [RS:1;jenkins-hbase4:33211] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33211%2C1689516954495:(num 1689516954998) 2023-07-16 14:15:58,154 DEBUG [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs 2023-07-16 14:15:58,154 DEBUG [RS:1;jenkins-hbase4:33211] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:58,154 INFO [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36389%2C1689516954455.meta:.meta(num 1689516955148) 2023-07-16 14:15:58,154 INFO [RS:1;jenkins-hbase4:33211] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:58,154 INFO [RS:1;jenkins-hbase4:33211] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:58,154 INFO [RS:1;jenkins-hbase4:33211] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 14:15:58,154 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:58,154 INFO [RS:1;jenkins-hbase4:33211] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 14:15:58,155 INFO [RS:1;jenkins-hbase4:33211] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 14:15:58,156 INFO [RS:1;jenkins-hbase4:33211] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33211 2023-07-16 14:15:58,159 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:58,159 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,159 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33211,1689516954495 2023-07-16 14:15:58,161 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33211,1689516954495] 2023-07-16 14:15:58,161 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33211,1689516954495; numProcessing=3 2023-07-16 14:15:58,161 DEBUG [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/oldWALs 2023-07-16 14:15:58,161 INFO [RS:0;jenkins-hbase4:36389] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36389%2C1689516954455:(num 1689516954993) 2023-07-16 14:15:58,161 DEBUG [RS:0;jenkins-hbase4:36389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 
14:15:58,161 INFO [RS:0;jenkins-hbase4:36389] regionserver.LeaseManager(133): Closed leases 2023-07-16 14:15:58,162 INFO [RS:0;jenkins-hbase4:36389] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 14:15:58,162 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:58,163 INFO [RS:0;jenkins-hbase4:36389] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36389 2023-07-16 14:15:58,163 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33211,1689516954495 already deleted, retry=false 2023-07-16 14:15:58,163 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33211,1689516954495 expired; onlineServers=1 2023-07-16 14:15:58,164 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36389,1689516954455 2023-07-16 14:15:58,164 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 14:15:58,166 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36389,1689516954455] 2023-07-16 14:15:58,166 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36389,1689516954455; numProcessing=4 2023-07-16 14:15:58,167 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36389,1689516954455 already deleted, retry=false 2023-07-16 14:15:58,167 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36389,1689516954455 expired; onlineServers=0 2023-07-16 14:15:58,167 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42483,1689516954385' ***** 2023-07-16 14:15:58,167 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 14:15:58,168 DEBUG [M:0;jenkins-hbase4:42483] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7cff8262, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 14:15:58,168 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 14:15:58,170 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 14:15:58,170 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, 
path=/hbase 2023-07-16 14:15:58,170 INFO [M:0;jenkins-hbase4:42483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@105e13de{master,/,null,STOPPED}{file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/master} 2023-07-16 14:15:58,170 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 14:15:58,171 INFO [M:0;jenkins-hbase4:42483] server.AbstractConnector(383): Stopped ServerConnector@22366487{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:58,171 INFO [M:0;jenkins-hbase4:42483] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 14:15:58,171 INFO [M:0;jenkins-hbase4:42483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69823956{static,/static,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/classes/hbase-webapps/static/,STOPPED} 2023-07-16 14:15:58,172 INFO [M:0;jenkins-hbase4:42483] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@633dccf4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/hadoop.log.dir/,STOPPED} 2023-07-16 14:15:58,172 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42483,1689516954385 2023-07-16 14:15:58,172 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42483,1689516954385; all regions closed. 2023-07-16 14:15:58,172 DEBUG [M:0;jenkins-hbase4:42483] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 14:15:58,172 INFO [M:0;jenkins-hbase4:42483] master.HMaster(1491): Stopping master jetty server 2023-07-16 14:15:58,173 INFO [M:0;jenkins-hbase4:42483] server.AbstractConnector(383): Stopped ServerConnector@35983a9a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 14:15:58,173 DEBUG [M:0;jenkins-hbase4:42483] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 14:15:58,173 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 14:15:58,173 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516954742] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689516954742,5,FailOnTimeoutGroup] 2023-07-16 14:15:58,173 DEBUG [M:0;jenkins-hbase4:42483] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 14:15:58,173 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516954742] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689516954742,5,FailOnTimeoutGroup] 2023-07-16 14:15:58,173 INFO [M:0;jenkins-hbase4:42483] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 14:15:58,173 INFO [M:0;jenkins-hbase4:42483] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-16 14:15:58,174 INFO [M:0;jenkins-hbase4:42483] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 14:15:58,174 DEBUG [M:0;jenkins-hbase4:42483] master.HMaster(1512): Stopping service threads 2023-07-16 14:15:58,174 INFO [M:0;jenkins-hbase4:42483] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 14:15:58,174 ERROR [M:0;jenkins-hbase4:42483] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 14:15:58,174 INFO [M:0;jenkins-hbase4:42483] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 14:15:58,174 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 14:15:58,174 DEBUG [M:0;jenkins-hbase4:42483] zookeeper.ZKUtil(398): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 14:15:58,174 WARN [M:0;jenkins-hbase4:42483] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 14:15:58,174 INFO [M:0;jenkins-hbase4:42483] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 14:15:58,174 INFO [M:0;jenkins-hbase4:42483] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 14:15:58,175 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 14:15:58,175 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:58,175 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:58,175 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 14:15:58,175 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 14:15:58,175 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.66 KB 2023-07-16 14:15:58,185 INFO [M:0;jenkins-hbase4:42483] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0cdef4c1e2f4458593d2567ea01f1c7a 2023-07-16 14:15:58,190 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0cdef4c1e2f4458593d2567ea01f1c7a as hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0cdef4c1e2f4458593d2567ea01f1c7a 2023-07-16 14:15:58,194 INFO [M:0;jenkins-hbase4:42483] regionserver.HStore(1080): Added hdfs://localhost:33443/user/jenkins/test-data/1100ec4f-c102-59cb-fbf8-379b8c9de6c0/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0cdef4c1e2f4458593d2567ea01f1c7a, entries=22, sequenceid=175, filesize=11.1 K 2023-07-16 14:15:58,195 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78011, heapSize ~90.64 KB/92816, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 20ms, sequenceid=175, compaction requested=false 2023-07-16 14:15:58,197 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 14:15:58,197 DEBUG [M:0;jenkins-hbase4:42483] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 14:15:58,199 INFO [M:0;jenkins-hbase4:42483] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 14:15:58,199 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 14:15:58,200 INFO [M:0;jenkins-hbase4:42483] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42483 2023-07-16 14:15:58,201 DEBUG [M:0;jenkins-hbase4:42483] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,42483,1689516954385 already deleted, retry=false 2023-07-16 14:15:58,524 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,524 INFO [M:0;jenkins-hbase4:42483] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42483,1689516954385; zookeeper connection closed. 2023-07-16 14:15:58,524 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): master:42483-0x1016e7d55fe0000, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,624 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,624 INFO [RS:0;jenkins-hbase4:36389] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36389,1689516954455; zookeeper connection closed. 
2023-07-16 14:15:58,624 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:36389-0x1016e7d55fe0001, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,624 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@295d59e2] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@295d59e2 2023-07-16 14:15:58,724 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,724 INFO [RS:1;jenkins-hbase4:33211] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33211,1689516954495; zookeeper connection closed. 2023-07-16 14:15:58,724 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:33211-0x1016e7d55fe0002, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,725 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7051435] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7051435 2023-07-16 14:15:58,825 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,825 INFO [RS:2;jenkins-hbase4:35057] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35057,1689516954537; zookeeper connection closed. 2023-07-16 14:15:58,825 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:35057-0x1016e7d55fe0003, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,825 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@9bae67a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@9bae67a 2023-07-16 14:15:58,925 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,925 INFO [RS:3;jenkins-hbase4:42869] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42869,1689516955752; zookeeper connection closed. 
2023-07-16 14:15:58,925 DEBUG [Listener at localhost/37985-EventThread] zookeeper.ZKWatcher(600): regionserver:42869-0x1016e7d55fe000b, quorum=127.0.0.1:50636, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 14:15:58,925 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2a831] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2a831 2023-07-16 14:15:58,925 INFO [Listener at localhost/37985] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 14:15:58,926 WARN [Listener at localhost/37985] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:58,929 INFO [Listener at localhost/37985] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:59,033 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:59,033 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-14668270-172.31.14.131-1689516953602 (Datanode Uuid 8e1c8203-2a11-434a-b6c3-f81853251689) service to localhost/127.0.0.1:33443 2023-07-16 14:15:59,034 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data5/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,034 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data6/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,036 WARN [Listener at localhost/37985] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:59,043 INFO [Listener at localhost/37985] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:59,147 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:59,147 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-14668270-172.31.14.131-1689516953602 (Datanode Uuid aa917b2b-c86d-4321-b990-7013d3a67aa1) service to localhost/127.0.0.1:33443 2023-07-16 14:15:59,147 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data3/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,148 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data4/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,148 WARN [Listener at localhost/37985] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 14:15:59,151 INFO [Listener at localhost/37985] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:59,254 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 14:15:59,254 WARN [BP-14668270-172.31.14.131-1689516953602 heartbeating to localhost/127.0.0.1:33443] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-14668270-172.31.14.131-1689516953602 (Datanode Uuid 79c2d733-84c9-4322-85e5-83cfb545620f) service to localhost/127.0.0.1:33443 2023-07-16 14:15:59,255 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data1/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,255 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/efbb17d7-5f35-8b2d-26aa-d56513146a13/cluster_52ee11f9-c150-fc2c-630d-fbaa3dfc1ebd/dfs/data/data2/current/BP-14668270-172.31.14.131-1689516953602] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 14:15:59,264 INFO [Listener at localhost/37985] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 14:15:59,379 INFO [Listener at localhost/37985] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 14:15:59,404 INFO [Listener at localhost/37985] hbase.HBaseTestingUtility(1293): Minicluster is down