2023-07-24 18:10:20,762 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45 2023-07-24 18:10:20,779 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-24 18:10:20,800 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 18:10:20,801 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1, deleteOnExit=true 2023-07-24 18:10:20,801 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 18:10:20,802 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/test.cache.data in system properties and HBase conf 2023-07-24 18:10:20,802 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 18:10:20,803 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir in system properties and HBase conf 2023-07-24 18:10:20,803 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 18:10:20,803 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 18:10:20,804 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 18:10:20,920 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-24 18:10:21,388 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-24 18:10:21,393 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:10:21,394 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:10:21,394 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 18:10:21,395 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:10:21,395 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 18:10:21,396 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 18:10:21,396 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:10:21,397 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:10:21,397 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 18:10:21,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/nfs.dump.dir in system properties and HBase conf 2023-07-24 18:10:21,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir in system properties and HBase conf 2023-07-24 18:10:21,398 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:10:21,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 18:10:21,399 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 18:10:21,967 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:10:21,971 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:10:22,257 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-24 18:10:22,442 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-24 18:10:22,461 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:22,501 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:22,551 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/Jetty_localhost_36049_hdfs____.6pwo2s/webapp 2023-07-24 18:10:22,709 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36049 2023-07-24 18:10:22,720 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:10:22,720 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:10:23,184 WARN [Listener at localhost/44625] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:23,273 WARN [Listener at localhost/44625] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:23,293 WARN [Listener at localhost/44625] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:23,300 INFO [Listener at localhost/44625] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:23,306 INFO [Listener at localhost/44625] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/Jetty_localhost_43933_datanode____uhwbxd/webapp 2023-07-24 18:10:23,410 INFO [Listener at localhost/44625] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43933 2023-07-24 18:10:23,857 WARN [Listener at localhost/33527] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:23,879 WARN [Listener at localhost/33527] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:23,886 WARN [Listener at localhost/33527] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:23,888 INFO [Listener at localhost/33527] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:23,895 INFO [Listener at localhost/33527] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/Jetty_localhost_42979_datanode____.nm8irg/webapp 2023-07-24 18:10:24,002 INFO [Listener at localhost/33527] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42979 2023-07-24 18:10:24,013 WARN [Listener at localhost/35249] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:24,034 WARN [Listener at localhost/35249] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:24,038 WARN [Listener at localhost/35249] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:24,041 INFO [Listener at localhost/35249] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:24,056 INFO [Listener at localhost/35249] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/Jetty_localhost_44843_datanode____.b8m4pu/webapp 2023-07-24 18:10:24,251 INFO [Listener at localhost/35249] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44843 2023-07-24 18:10:24,306 WARN [Listener at localhost/39007] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:24,768 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a3e88acfe6ee29d: Processing first storage report for DS-cbeb2446-245e-4c39-86f7-ee43beeea239 from datanode 0d6b3055-580e-49a4-aae1-e80de5350415 2023-07-24 18:10:24,770 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a3e88acfe6ee29d: from storage DS-cbeb2446-245e-4c39-86f7-ee43beeea239 node DatanodeRegistration(127.0.0.1:34623, datanodeUuid=0d6b3055-580e-49a4-aae1-e80de5350415, infoPort=33871, 
infoSecurePort=0, ipcPort=33527, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ba72d64a44870a3: Processing first storage report for DS-2361d440-9ecc-4ffc-8670-240b554a18c1 from datanode 965e7a12-6020-438c-a67f-6b9609687952 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ba72d64a44870a3: from storage DS-2361d440-9ecc-4ffc-8670-240b554a18c1 node DatanodeRegistration(127.0.0.1:36767, datanodeUuid=965e7a12-6020-438c-a67f-6b9609687952, infoPort=44809, infoSecurePort=0, ipcPort=35249, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x6a3e88acfe6ee29d: Processing first storage report for DS-08c68b3c-7657-437c-80c8-41e7506d0b23 from datanode 0d6b3055-580e-49a4-aae1-e80de5350415 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x6a3e88acfe6ee29d: from storage DS-08c68b3c-7657-437c-80c8-41e7506d0b23 node DatanodeRegistration(127.0.0.1:34623, datanodeUuid=0d6b3055-580e-49a4-aae1-e80de5350415, infoPort=33871, infoSecurePort=0, ipcPort=33527, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9135473ffa9ac867: Processing first storage report for DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c from datanode 766ae196-7e07-47fa-950c-c13d89ace784 2023-07-24 18:10:24,771 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9135473ffa9ac867: from storage DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c node DatanodeRegistration(127.0.0.1:41213, datanodeUuid=766ae196-7e07-47fa-950c-c13d89ace784, infoPort=40635, infoSecurePort=0, ipcPort=39007, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,772 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ba72d64a44870a3: Processing first storage report for DS-b853bea9-ffbd-4aec-887a-d5a087ddfd4d from datanode 965e7a12-6020-438c-a67f-6b9609687952 2023-07-24 18:10:24,772 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ba72d64a44870a3: from storage DS-b853bea9-ffbd-4aec-887a-d5a087ddfd4d node DatanodeRegistration(127.0.0.1:36767, datanodeUuid=965e7a12-6020-438c-a67f-6b9609687952, infoPort=44809, infoSecurePort=0, ipcPort=35249, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,772 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9135473ffa9ac867: Processing first storage report for DS-6883cc15-69ab-42da-907e-22312d3ebe55 from datanode 766ae196-7e07-47fa-950c-c13d89ace784 2023-07-24 18:10:24,772 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9135473ffa9ac867: from storage 
DS-6883cc15-69ab-42da-907e-22312d3ebe55 node DatanodeRegistration(127.0.0.1:41213, datanodeUuid=766ae196-7e07-47fa-950c-c13d89ace784, infoPort=40635, infoSecurePort=0, ipcPort=39007, storageInfo=lv=-57;cid=testClusterID;nsid=1120491340;c=1690222222036), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:24,916 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45 2023-07-24 18:10:25,002 INFO [Listener at localhost/39007] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/zookeeper_0, clientPort=51807, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 18:10:25,026 INFO [Listener at localhost/39007] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51807 2023-07-24 18:10:25,037 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:25,040 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:25,737 INFO [Listener at localhost/39007] util.FSUtils(471): Created version file at hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f with version=8 2023-07-24 18:10:25,737 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/hbase-staging 2023-07-24 18:10:25,748 DEBUG [Listener at localhost/39007] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 18:10:25,748 DEBUG [Listener at localhost/39007] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 18:10:25,749 DEBUG [Listener at localhost/39007] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 18:10:25,749 DEBUG [Listener at localhost/39007] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
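For orientation, a minimal sketch (not the actual TestRSGroupsAdmin1 source) of the kind of JUnit setup that produces the startup sequence logged above, assuming the stock HBaseTestingUtility/StartMiniClusterOption API on branch-2.4; the option values simply mirror the StartMiniClusterOption printed at 18:10:20,800.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSetupSketch {
  // Shared test utility; rootdir and cluster data land under target/test-data as logged above.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  public static void setUpCluster() throws Exception {
    // 1 master, 3 region servers, 3 datanodes, 1 ZooKeeper server,
    // matching the StartMiniClusterOption reported by HBaseTestingUtility(1068).
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  public static void tearDownCluster() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}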
2023-07-24 18:10:26,166 INFO [Listener at localhost/39007] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-24 18:10:26,758 INFO [Listener at localhost/39007] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:26,809 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:26,809 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:26,810 INFO [Listener at localhost/39007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:26,810 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:26,810 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:26,981 INFO [Listener at localhost/39007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:27,062 DEBUG [Listener at localhost/39007] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-24 18:10:27,158 INFO [Listener at localhost/39007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46543 2023-07-24 18:10:27,169 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:27,171 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:27,192 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=master:46543 connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:27,237 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:465430x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:27,243 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:46543-0x1019886e9540000 connected 2023-07-24 18:10:27,311 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:27,312 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:27,316 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:27,325 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46543 2023-07-24 18:10:27,326 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46543 2023-07-24 18:10:27,326 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46543 2023-07-24 18:10:27,327 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46543 2023-07-24 18:10:27,330 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46543 2023-07-24 18:10:27,366 INFO [Listener at localhost/39007] log.Log(170): Logging initialized @7425ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-24 18:10:27,503 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:27,504 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:27,505 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:27,508 INFO [Listener at localhost/39007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 18:10:27,508 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:27,508 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:27,513 INFO [Listener at localhost/39007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
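Once the master RPC server is bound and its znodes are registered (the NettyRpcServer bind to port 46543 and the /hbase/master, /hbase/running and /hbase/acl watchers above), a test typically reaches the cluster through the ordinary client API. A hedged sketch, reusing the TEST_UTIL-style utility from the previous snippet; listTableDescriptors is only a placeholder call.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ClientConnectionSketch {
  static void touchCluster(HBaseTestingUtility util) throws Exception {
    // The client locates the active master via the ZooKeeper ensemble logged above
    // (127.0.0.1:51807); no master address is hard-coded.
    try (Connection conn = ConnectionFactory.createConnection(util.getConfiguration());
         Admin admin = conn.getAdmin()) {
      admin.listTableDescriptors();
    }
  }
}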
2023-07-24 18:10:27,596 INFO [Listener at localhost/39007] http.HttpServer(1146): Jetty bound to port 38249 2023-07-24 18:10:27,598 INFO [Listener at localhost/39007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:27,639 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:27,644 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b51199{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:27,645 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:27,645 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4950e91d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:27,833 INFO [Listener at localhost/39007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:27,846 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:27,847 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:27,849 INFO [Listener at localhost/39007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:27,857 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:27,886 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@71e552e9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/jetty-0_0_0_0-38249-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1564711776393232494/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:10:27,898 INFO [Listener at localhost/39007] server.AbstractConnector(333): Started ServerConnector@7f8b825b{HTTP/1.1, (http/1.1)}{0.0.0.0:38249} 2023-07-24 18:10:27,898 INFO [Listener at localhost/39007] server.Server(415): Started @7957ms 2023-07-24 18:10:27,902 INFO [Listener at localhost/39007] master.HMaster(444): hbase.rootdir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f, hbase.cluster.distributed=false 2023-07-24 18:10:27,977 INFO [Listener at localhost/39007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:27,978 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:27,978 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:27,978 INFO 
[Listener at localhost/39007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:27,979 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:27,979 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:27,986 INFO [Listener at localhost/39007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:27,989 INFO [Listener at localhost/39007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40159 2023-07-24 18:10:27,992 INFO [Listener at localhost/39007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:28,000 DEBUG [Listener at localhost/39007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:28,002 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,004 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,006 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:40159 connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:28,010 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:401590x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:28,012 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:40159-0x1019886e9540001 connected 2023-07-24 18:10:28,012 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:28,016 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:28,017 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:28,019 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40159 2023-07-24 18:10:28,026 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40159 2023-07-24 18:10:28,027 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40159 2023-07-24 18:10:28,028 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40159 2023-07-24 18:10:28,028 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40159 2023-07-24 18:10:28,031 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:28,032 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:28,032 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:28,033 INFO [Listener at localhost/39007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:28,033 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:28,033 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:28,034 INFO [Listener at localhost/39007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:28,036 INFO [Listener at localhost/39007] http.HttpServer(1146): Jetty bound to port 35931 2023-07-24 18:10:28,036 INFO [Listener at localhost/39007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:28,047 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,047 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@705c29b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:28,048 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,048 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@b134c3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:28,201 INFO [Listener at localhost/39007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:28,202 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:28,202 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:28,202 INFO [Listener at localhost/39007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:28,206 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,211 INFO 
[Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2068cbfe{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/jetty-0_0_0_0-35931-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2477683747812523958/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:28,212 INFO [Listener at localhost/39007] server.AbstractConnector(333): Started ServerConnector@2cb4cda5{HTTP/1.1, (http/1.1)}{0.0.0.0:35931} 2023-07-24 18:10:28,212 INFO [Listener at localhost/39007] server.Server(415): Started @8271ms 2023-07-24 18:10:28,229 INFO [Listener at localhost/39007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:28,229 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,229 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,230 INFO [Listener at localhost/39007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:28,230 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,230 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:28,230 INFO [Listener at localhost/39007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:28,233 INFO [Listener at localhost/39007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42261 2023-07-24 18:10:28,233 INFO [Listener at localhost/39007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:28,235 DEBUG [Listener at localhost/39007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:28,237 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,239 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,242 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42261 connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:28,254 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:422610x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
18:10:28,256 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:422610x0, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:28,257 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42261-0x1019886e9540002 connected 2023-07-24 18:10:28,257 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:28,258 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:28,269 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42261 2023-07-24 18:10:28,270 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42261 2023-07-24 18:10:28,270 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42261 2023-07-24 18:10:28,275 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42261 2023-07-24 18:10:28,278 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42261 2023-07-24 18:10:28,282 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:28,282 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:28,283 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:28,284 INFO [Listener at localhost/39007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:28,284 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:28,284 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:28,284 INFO [Listener at localhost/39007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
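Because the class under test is TestRSGroupsAdmin1, the region servers registering above are the servers that later rsgroup admin calls operate on. A hedged illustration of that admin API, assuming the branch-2.4 RSGroupAdminClient and a master running the RSGroupAdminEndpoint coprocessor (which the rsgroup tests configure); the group name "appgroup" and the host/port parameters are placeholders.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupAdminSketch {
  static void createGroupAndMoveServer(Connection conn, String host, int port) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    // Create a new region server group, then move one server (e.g. one of the
    // region server ports registering above) into it.
    rsGroupAdmin.addRSGroup("appgroup");
    rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), "appgroup");
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("appgroup");
    System.out.println("servers in appgroup: " + info.getServers());
  }
}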
2023-07-24 18:10:28,285 INFO [Listener at localhost/39007] http.HttpServer(1146): Jetty bound to port 38915 2023-07-24 18:10:28,285 INFO [Listener at localhost/39007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:28,299 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,299 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@34863637{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:28,300 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,300 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@454bace{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:28,437 INFO [Listener at localhost/39007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:28,438 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:28,438 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:28,439 INFO [Listener at localhost/39007] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:28,441 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,442 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4bc6a9e2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/jetty-0_0_0_0-38915-hbase-server-2_4_18-SNAPSHOT_jar-_-any-36684737575314211/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:28,443 INFO [Listener at localhost/39007] server.AbstractConnector(333): Started ServerConnector@6b7406ba{HTTP/1.1, (http/1.1)}{0.0.0.0:38915} 2023-07-24 18:10:28,443 INFO [Listener at localhost/39007] server.Server(415): Started @8502ms 2023-07-24 18:10:28,457 INFO [Listener at localhost/39007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:28,458 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,458 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,458 INFO [Listener at localhost/39007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:28,458 INFO 
[Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:28,458 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:28,458 INFO [Listener at localhost/39007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:28,461 INFO [Listener at localhost/39007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46109 2023-07-24 18:10:28,462 INFO [Listener at localhost/39007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:28,470 DEBUG [Listener at localhost/39007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:28,471 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,473 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,475 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46109 connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:28,481 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:461090x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:28,483 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46109-0x1019886e9540003 connected 2023-07-24 18:10:28,483 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:28,484 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:28,485 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:28,495 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46109 2023-07-24 18:10:28,498 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46109 2023-07-24 18:10:28,507 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46109 2023-07-24 18:10:28,513 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46109 2023-07-24 18:10:28,514 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46109 2023-07-24 18:10:28,516 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:28,516 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:28,516 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:28,517 INFO [Listener at localhost/39007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:28,517 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:28,517 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:28,518 INFO [Listener at localhost/39007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:28,519 INFO [Listener at localhost/39007] http.HttpServer(1146): Jetty bound to port 44895 2023-07-24 18:10:28,519 INFO [Listener at localhost/39007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:28,523 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,523 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@709df1b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:28,523 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,524 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5857c9af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:28,655 INFO [Listener at localhost/39007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:28,656 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:28,657 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:28,657 INFO [Listener at localhost/39007] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:10:28,658 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:28,659 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6c3aed70{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/jetty-0_0_0_0-44895-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7031555997417457713/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:28,661 INFO [Listener at localhost/39007] server.AbstractConnector(333): Started ServerConnector@6c3f6670{HTTP/1.1, (http/1.1)}{0.0.0.0:44895} 2023-07-24 18:10:28,661 INFO [Listener at localhost/39007] server.Server(415): Started @8719ms 2023-07-24 18:10:28,667 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:28,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5b1cb69d{HTTP/1.1, (http/1.1)}{0.0.0.0:42113} 2023-07-24 18:10:28,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8739ms 2023-07-24 18:10:28,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:28,692 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:10:28,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:28,714 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:28,714 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:28,714 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:28,714 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:28,714 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:28,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:28,720 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:28,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,46543,1690222225966 from backup master directory 2023-07-24 18:10:28,726 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:28,727 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:10:28,728 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:28,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:28,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-24 18:10:28,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-24 18:10:28,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/hbase.id with ID: e9e40271-0f12-407b-a4b1-71d428f14f45 2023-07-24 18:10:28,882 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:28,899 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:28,967 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x09645a10 to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:28,997 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@43ba0dc6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:29,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:29,026 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 18:10:29,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-24 18:10:29,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-24 18:10:29,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:29,055 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
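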
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:29,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:29,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store-tmp 2023-07-24 18:10:29,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:29,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:10:29,158 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:29,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:29,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:10:29,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:29,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 18:10:29,159 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:29,160 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/WALs/jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:29,186 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46543%2C1690222225966, suffix=, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/WALs/jenkins-hbase4.apache.org,46543,1690222225966, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/oldWALs, maxLogs=10 2023-07-24 18:10:29,263 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:29,263 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:29,263 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:29,274 DEBUG [RS-EventLoopGroup-5-2] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-24 18:10:29,352 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/WALs/jenkins-hbase4.apache.org,46543,1690222225966/jenkins-hbase4.apache.org%2C46543%2C1690222225966.1690222229197 2023-07-24 18:10:29,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK]] 2023-07-24 18:10:29,353 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:29,354 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:29,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,359 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,428 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,434 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:10:29,464 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:10:29,477 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-24 18:10:29,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,484 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,502 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:29,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:29,506 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10652159680, jitterRate=-0.007940322160720825}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:29,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:29,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:10:29,531 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:10:29,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:10:29,536 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 18:10:29,538 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-24 18:10:29,590 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 52 msec 2023-07-24 18:10:29,591 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:10:29,620 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 18:10:29,627 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-24 18:10:29,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 18:10:29,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:10:29,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:10:29,649 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:29,650 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:10:29,651 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:10:29,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:10:29,670 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:29,670 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:29,670 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:29,670 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:29,671 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:29,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,46543,1690222225966, sessionid=0x1019886e9540000, setting cluster-up flag (Was=false) 2023-07-24 18:10:29,691 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:29,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:10:29,701 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:29,708 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:29,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:10:29,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:29,718 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.hbase-snapshot/.tmp 2023-07-24 18:10:29,766 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(951): ClusterId : e9e40271-0f12-407b-a4b1-71d428f14f45 2023-07-24 18:10:29,766 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(951): ClusterId : e9e40271-0f12-407b-a4b1-71d428f14f45 2023-07-24 18:10:29,766 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(951): ClusterId : e9e40271-0f12-407b-a4b1-71d428f14f45 2023-07-24 18:10:29,773 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:29,773 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:29,773 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:29,782 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:29,782 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:29,782 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:29,783 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:29,782 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:29,783 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:29,788 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:29,788 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:29,789 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:29,798 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ReadOnlyZKClient(139): Connect 0x3523a468 to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-24 18:10:29,798 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ReadOnlyZKClient(139): Connect 0x59589414 to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:29,798 DEBUG [RS:0;jenkins-hbase4:40159] zookeeper.ReadOnlyZKClient(139): Connect 0x79caa47e to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:29,810 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:10:29,814 DEBUG [RS:1;jenkins-hbase4:42261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39c5dac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:29,815 DEBUG [RS:1;jenkins-hbase4:42261] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@726a62cb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:29,818 DEBUG [RS:2;jenkins-hbase4:46109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@494271f8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:29,819 DEBUG [RS:2;jenkins-hbase4:46109] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@229e7a5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:29,819 DEBUG [RS:0;jenkins-hbase4:40159] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3cc83523, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:29,820 DEBUG [RS:0;jenkins-hbase4:40159] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5967d70b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:29,822 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:10:29,824 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:29,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:10:29,827 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-24 18:10:29,849 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:40159 2023-07-24 18:10:29,849 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46109 2023-07-24 18:10:29,851 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:42261 2023-07-24 18:10:29,858 INFO [RS:1;jenkins-hbase4:42261] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:29,858 INFO [RS:1;jenkins-hbase4:42261] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:29,858 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:29,858 INFO [RS:0;jenkins-hbase4:40159] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:29,860 INFO [RS:0;jenkins-hbase4:40159] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:29,860 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:29,858 INFO [RS:2;jenkins-hbase4:46109] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:29,860 INFO [RS:2;jenkins-hbase4:46109] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:29,861 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:29,864 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:40159, startcode=1690222227976 2023-07-24 18:10:29,864 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:42261, startcode=1690222228228 2023-07-24 18:10:29,864 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:46109, startcode=1690222228457 2023-07-24 18:10:29,888 DEBUG [RS:2;jenkins-hbase4:46109] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:29,888 DEBUG [RS:1;jenkins-hbase4:42261] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:29,888 DEBUG [RS:0;jenkins-hbase4:40159] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:29,971 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:29,985 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48965, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:29,985 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57317, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-24 18:10:29,985 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57697, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:30,000 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:30,016 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:30,018 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:30,048 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:30,055 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:30,055 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:30,055 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(2830): Master is not running yet 2023-07-24 18:10:30,055 WARN [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
2023-07-24 18:10:30,055 WARN [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 18:10:30,055 WARN [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-24 18:10:30,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:10:30,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:30,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:10:30,060 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:30,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,072 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; 
org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690222260072 2023-07-24 18:10:30,076 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:10:30,080 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:10:30,086 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:30,087 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 18:10:30,090 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:10:30,090 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:30,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:10:30,091 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:10:30,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:10:30,097 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:30,100 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:10:30,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:10:30,102 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:10:30,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:10:30,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:10:30,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222230108,5,FailOnTimeoutGroup] 2023-07-24 18:10:30,111 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222230111,5,FailOnTimeoutGroup] 2023-07-24 18:10:30,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 18:10:30,125 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,127 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,157 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:40159, startcode=1690222227976 2023-07-24 18:10:30,157 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:42261, startcode=1690222228228 2023-07-24 18:10:30,157 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:46109, startcode=1690222228457 2023-07-24 18:10:30,163 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,165 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:10:30,167 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:10:30,182 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:30,182 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:10:30,185 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,186 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:30,187 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:10:30,187 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f 2023-07-24 18:10:30,187 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f 2023-07-24 18:10:30,187 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44625 2023-07-24 18:10:30,187 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38249 2023-07-24 18:10:30,187 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44625 2023-07-24 18:10:30,189 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38249 2023-07-24 18:10:30,189 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f 2023-07-24 18:10:30,190 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44625 2023-07-24 18:10:30,190 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38249 2023-07-24 18:10:30,199 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:30,203 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on 
existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,203 WARN [RS:1;jenkins-hbase4:42261] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:30,204 INFO [RS:1;jenkins-hbase4:42261] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:30,204 DEBUG [RS:0;jenkins-hbase4:40159] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,204 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,204 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,205 WARN [RS:2;jenkins-hbase4:46109] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:30,205 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46109,1690222228457] 2023-07-24 18:10:30,205 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42261,1690222228228] 2023-07-24 18:10:30,205 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,40159,1690222227976] 2023-07-24 18:10:30,204 WARN [RS:0;jenkins-hbase4:40159] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:10:30,205 INFO [RS:2;jenkins-hbase4:46109] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:30,205 INFO [RS:0;jenkins-hbase4:40159] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:30,210 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,210 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,209 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:30,211 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:30,211 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f 2023-07-24 18:10:30,245 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,245 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,247 DEBUG [RS:0;jenkins-hbase4:40159] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,247 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,249 DEBUG [RS:0;jenkins-hbase4:40159] 
zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,249 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,249 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,251 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,251 DEBUG [RS:0;jenkins-hbase4:40159] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,254 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:30,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:30,260 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info 2023-07-24 18:10:30,261 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:30,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:30,265 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:30,265 DEBUG [RS:0;jenkins-hbase4:40159] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 
18:10:30,265 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:30,265 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:30,272 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:30,273 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,274 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:30,276 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table 2023-07-24 18:10:30,277 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:30,278 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,280 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:30,283 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:30,286 INFO [RS:0;jenkins-hbase4:40159] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:30,288 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:10:30,290 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:30,286 INFO [RS:1;jenkins-hbase4:42261] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:30,286 INFO [RS:2;jenkins-hbase4:46109] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:30,294 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:30,295 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10828826400, jitterRate=0.00851304829120636}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:30,295 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:30,295 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:30,296 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:30,296 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:30,296 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:30,296 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:30,303 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:30,303 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:30,309 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:30,309 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 18:10:30,321 INFO [RS:1;jenkins-hbase4:42261] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:30,321 INFO [RS:0;jenkins-hbase4:40159] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:30,321 INFO [RS:2;jenkins-hbase4:46109] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:30,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:10:30,328 INFO [RS:2;jenkins-hbase4:46109] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:30,328 INFO [RS:1;jenkins-hbase4:42261] 
throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:30,328 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,328 INFO [RS:0;jenkins-hbase4:40159] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:30,329 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,329 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,331 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:30,332 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:30,333 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:30,339 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:10:30,341 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 18:10:30,343 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,343 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,343 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
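The block above is the RegionServers wiring up their background work: the PressureAwareCompactionThroughputController bounds (50-100 MB/s, tuned every 60 s) and a set of ChoreService tasks with fixed periods (CompactionChecker every 1000 ms, CompactionThroughputTuner every 60000 ms, CompactedHFilesCleaner every 120000 ms). As a rough mental model only, those chores reduce to fixed-period scheduling; the JDK sketch below mirrors the periods from the log, with placeholder task bodies rather than HBase's real chore classes.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative only: fixed-period "chores" with the periods seen in the log.
    public class ChoreSketch {
        public static void main(String[] args) {
            ScheduledExecutorService chores = Executors.newScheduledThreadPool(2);

            // CompactionChecker: period=1000 ms (PT1S in the log)
            chores.scheduleAtFixedRate(() -> System.out.println("check stores for compaction"),
                    1_000, 1_000, TimeUnit.MILLISECONDS);

            // CompactionThroughputTuner: period=60000 ms
            chores.scheduleAtFixedRate(() -> System.out.println("tune compaction throughput"),
                    60_000, 60_000, TimeUnit.MILLISECONDS);

            // CompactedHFilesCleaner: period=120000 ms
            chores.scheduleAtFixedRate(() -> System.out.println("discharge compacted HFiles"),
                    120_000, 120_000, TimeUnit.MILLISECONDS);
        }
    }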
2023-07-24 18:10:30,343 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,344 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:30,344 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:0;jenkins-hbase4:40159] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,345 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:30,346 DEBUG [RS:1;jenkins-hbase4:42261] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,346 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,346 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,346 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,346 DEBUG [RS:2;jenkins-hbase4:46109] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:30,347 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,347 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,347 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,347 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,348 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,348 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
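Each "Starting executor service name=..., corePoolSize=N, maxPoolSize=N" line above is a small, dedicated pool per event type (open region, open meta, close region, log replay, and so on). A plain-JDK analogue of two of those pools is sketched below; it is only meant to make the core/max sizes in the log concrete, not to reproduce HBase's executor framework.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    // Illustrative only: bounded per-purpose pools like the log's executor services.
    public class RsExecutorSketch {
        public static void main(String[] args) {
            // RS_OPEN_REGION style: corePoolSize=1, maxPoolSize=1
            ThreadPoolExecutor openRegion = new ThreadPoolExecutor(
                    1, 1, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

            // RS_LOG_REPLAY_OPS style: corePoolSize=2, maxPoolSize=2
            ThreadPoolExecutor logReplay = new ThreadPoolExecutor(
                    2, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

            openRegion.execute(() -> System.out.println("open region handler"));
            logReplay.execute(() -> System.out.println("log replay handler"));

            openRegion.shutdown();
            logReplay.shutdown();
        }
    }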
2023-07-24 18:10:30,350 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,351 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,351 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,364 INFO [RS:2;jenkins-hbase4:46109] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:30,364 INFO [RS:1;jenkins-hbase4:42261] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:30,364 INFO [RS:0;jenkins-hbase4:40159] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:30,367 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42261,1690222228228-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,367 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40159,1690222227976-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,367 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46109,1690222228457-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:30,385 INFO [RS:1;jenkins-hbase4:42261] regionserver.Replication(203): jenkins-hbase4.apache.org,42261,1690222228228 started 2023-07-24 18:10:30,385 INFO [RS:2;jenkins-hbase4:46109] regionserver.Replication(203): jenkins-hbase4.apache.org,46109,1690222228457 started 2023-07-24 18:10:30,385 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42261,1690222228228, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42261, sessionid=0x1019886e9540002 2023-07-24 18:10:30,385 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46109,1690222228457, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46109, sessionid=0x1019886e9540003 2023-07-24 18:10:30,386 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:30,386 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:30,386 DEBUG [RS:1;jenkins-hbase4:42261] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,386 DEBUG [RS:2;jenkins-hbase4:46109] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,387 DEBUG [RS:1;jenkins-hbase4:42261] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42261,1690222228228' 2023-07-24 18:10:30,387 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46109,1690222228457' 2023-07-24 18:10:30,388 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:30,388 DEBUG [RS:1;jenkins-hbase4:42261] 
procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:30,389 DEBUG [RS:1;jenkins-hbase4:42261] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:30,389 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:30,389 INFO [RS:0;jenkins-hbase4:40159] regionserver.Replication(203): jenkins-hbase4.apache.org,40159,1690222227976 started 2023-07-24 18:10:30,389 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,40159,1690222227976, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:40159, sessionid=0x1019886e9540001 2023-07-24 18:10:30,389 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:30,390 DEBUG [RS:0;jenkins-hbase4:40159] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,390 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40159,1690222227976' 2023-07-24 18:10:30,390 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:30,390 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:30,390 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:30,390 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:30,390 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:30,390 DEBUG [RS:2;jenkins-hbase4:46109] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:30,390 DEBUG [RS:1;jenkins-hbase4:42261] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,390 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46109,1690222228457' 2023-07-24 18:10:30,390 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:30,390 DEBUG [RS:1;jenkins-hbase4:42261] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42261,1690222228228' 2023-07-24 18:10:30,390 DEBUG [RS:1;jenkins-hbase4:42261] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:30,390 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:30,391 DEBUG [RS:2;jenkins-hbase4:46109] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:30,391 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(53): Procedure 
flush-table-proc started 2023-07-24 18:10:30,391 DEBUG [RS:1;jenkins-hbase4:42261] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:30,391 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:30,391 DEBUG [RS:0;jenkins-hbase4:40159] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:30,391 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,40159,1690222227976' 2023-07-24 18:10:30,391 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:30,391 DEBUG [RS:2;jenkins-hbase4:46109] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:30,391 INFO [RS:2;jenkins-hbase4:46109] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:30,391 INFO [RS:2;jenkins-hbase4:46109] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:10:30,392 DEBUG [RS:1;jenkins-hbase4:42261] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:30,392 DEBUG [RS:0;jenkins-hbase4:40159] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:30,392 INFO [RS:1;jenkins-hbase4:42261] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:30,392 INFO [RS:1;jenkins-hbase4:42261] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:10:30,392 DEBUG [RS:0;jenkins-hbase4:40159] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:30,392 INFO [RS:0;jenkins-hbase4:40159] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:30,392 INFO [RS:0;jenkins-hbase4:40159] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
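All three RegionServers report "Quota support disabled" above, which is the default for this test setup. As a sketch only, and assuming the usual switch is the hbase.quota.enabled configuration property (verify against your HBase version), enabling RPC/space quotas for a cluster would look roughly like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Sketch only; "hbase.quota.enabled" is an assumed key, not taken from this log.
    public class QuotaConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            conf.setBoolean("hbase.quota.enabled", true);   // quotas are off by default
            System.out.println("hbase.quota.enabled=" + conf.getBoolean("hbase.quota.enabled", false));
        }
    }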
2023-07-24 18:10:30,494 DEBUG [jenkins-hbase4:46543] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:10:30,508 INFO [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46109%2C1690222228457, suffix=, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,46109,1690222228457, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:30,508 INFO [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42261%2C1690222228228, suffix=, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,42261,1690222228228, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:30,511 DEBUG [jenkins-hbase4:46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:30,513 INFO [RS:0;jenkins-hbase4:40159] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40159%2C1690222227976, suffix=, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,40159,1690222227976, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:30,513 DEBUG [jenkins-hbase4:46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:30,513 DEBUG [jenkins-hbase4:46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:30,513 DEBUG [jenkins-hbase4:46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:30,513 DEBUG [jenkins-hbase4:46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:30,517 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42261,1690222228228, state=OPENING 2023-07-24 18:10:30,538 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 18:10:30,546 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:30,546 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:30,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:30,555 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:30,567 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:30,569 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:30,567 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:30,571 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:30,571 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:30,576 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:30,576 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:30,576 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:30,589 INFO [RS:0;jenkins-hbase4:40159] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,40159,1690222227976/jenkins-hbase4.apache.org%2C40159%2C1690222227976.1690222230517 2023-07-24 18:10:30,590 DEBUG [RS:0;jenkins-hbase4:40159] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK]] 2023-07-24 18:10:30,591 INFO [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,42261,1690222228228/jenkins-hbase4.apache.org%2C42261%2C1690222228228.1690222230517 2023-07-24 18:10:30,591 DEBUG [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK]] 2023-07-24 18:10:30,593 INFO [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,46109,1690222228457/jenkins-hbase4.apache.org%2C46109%2C1690222228457.1690222230523 2023-07-24 18:10:30,594 DEBUG [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK]] 2023-07-24 18:10:30,647 WARN [ReadOnlyZKClient-127.0.0.1:51807@0x09645a10] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:10:30,674 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:30,678 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34722, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:30,679 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42261] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34722 deadline: 1690222290679, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,764 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:30,769 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:30,775 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34738, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:30,791 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:10:30,792 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:30,796 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42261%2C1690222228228.meta, suffix=.meta, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,42261,1690222228228, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:30,815 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:30,816 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:30,817 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:30,828 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,42261,1690222228228/jenkins-hbase4.apache.org%2C42261%2C1690222228228.meta.1690222230797.meta 2023-07-24 18:10:30,828 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK], DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK]] 2023-07-24 18:10:30,829 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:30,830 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:30,833 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:10:30,835 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
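The open of hbase:meta above loads the MultiRowMutationEndpoint coprocessor from the table descriptor with priority 536870911, i.e. Integer.MAX_VALUE / 4, the built-in system coprocessor priority. For context, attaching a coprocessor to a descriptor on the client side looks roughly like the sketch below; the table name and family settings are illustrative and are not the real hbase:meta definition, which is created by the master itself.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch only: a descriptor that carries a coprocessor, as the meta descriptor does.
    public class CoprocessorDescriptorSketch {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo_table"))          // illustrative name
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                            .newBuilder(Bytes.toBytes("info"))
                            .setMaxVersions(3)
                            .setInMemory(true)
                            .setBlocksize(8192)
                            .build())
                    .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                    .build();
            System.out.println(td);
        }
    }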
2023-07-24 18:10:30,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:10:30,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:30,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:10:30,841 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:10:30,844 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:30,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info 2023-07-24 18:10:30,846 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info 2023-07-24 18:10:30,846 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:30,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,847 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:30,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:30,849 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:30,849 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:30,850 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,850 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:30,852 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table 2023-07-24 18:10:30,852 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table 2023-07-24 18:10:30,852 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:30,853 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:30,854 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:30,857 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:30,861 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
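The FlushLargeStoresPolicy message above ("using region.getMemStoreFlushHeapSize/# of families (42.7 M) instead") is simple arithmetic: with no per-column-family lower bound configured, the region's memstore flush size (128 MB by default) is divided by the number of families in hbase:meta (info, rep_barrier, table = 3), giving the flushSizeLowerBound=44739242 seen in the "Opened 1588230740" entries. The numbers come straight from this log; the class below only reproduces the calculation.

    // Sketch of the flush-lower-bound arithmetic reported in the log.
    public class FlushLowerBoundSketch {
        public static void main(String[] args) {
            long memStoreFlushSize = 128L * 1024 * 1024;   // 134217728 bytes (default flush size)
            int numberOfFamilies = 3;                      // hbase:meta has info, rep_barrier, table
            long lowerBound = memStoreFlushSize / numberOfFamilies;
            // Prints 44739242 bytes, i.e. ~42.7 MB, matching flushSizeLowerBound=44739242 above.
            System.out.println(lowerBound + " bytes = " + (lowerBound / 1024.0 / 1024.0) + " MB");
        }
    }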
2023-07-24 18:10:30,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:30,865 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11146428960, jitterRate=0.03809209167957306}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:30,865 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:30,885 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690222230759 2023-07-24 18:10:30,904 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:10:30,905 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:10:30,905 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42261,1690222228228, state=OPEN 2023-07-24 18:10:30,908 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:30,908 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:30,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 18:10:30,912 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42261,1690222228228 in 353 msec 2023-07-24 18:10:30,917 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 18:10:30,917 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 591 msec 2023-07-24 18:10:30,924 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0840 sec 2023-07-24 18:10:30,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222230924, completionTime=-1 2023-07-24 18:10:30,925 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 18:10:30,925 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 18:10:30,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:10:30,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222290997 2023-07-24 18:10:30,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222350997 2023-07-24 18:10:30,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 72 msec 2023-07-24 18:10:31,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46543,1690222225966-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:31,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46543,1690222225966-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:31,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46543,1690222225966-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:31,016 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:46543, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:31,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:31,024 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 18:10:31,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
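The master now creates hbase:namespace; the next entry prints the descriptor it uses (an 'info' family with BLOOMFILTER=ROW, IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192). As a sketch only, a client-side create of a table with an equivalent family would look like the code below; a user-level table name is used because system tables such as hbase:namespace are created by the master itself, not by clients.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch only: client-side equivalent of the family settings printed in the next entry.
    public class CreateTableSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.createTable(TableDescriptorBuilder
                        .newBuilder(TableName.valueOf("demo_namespace_like"))   // illustrative name
                        .setColumnFamily(ColumnFamilyDescriptorBuilder
                                .newBuilder(Bytes.toBytes("info"))
                                .setBloomFilterType(BloomType.ROW)
                                .setInMemory(true)
                                .setMaxVersions(10)
                                .setBlocksize(8192)
                                .build())
                        .build());
            }
        }
    }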
2023-07-24 18:10:31,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:31,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 18:10:31,052 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:31,055 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:31,072 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,076 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 empty. 2023-07-24 18:10:31,077 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,077 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 18:10:31,115 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:31,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7a7f564afa8892e109c3421f089102f9, NAME => 'hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:31,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:31,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 7a7f564afa8892e109c3421f089102f9, disabling compactions & flushes 2023-07-24 18:10:31,137 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
2023-07-24 18:10:31,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. after waiting 0 ms 2023-07-24 18:10:31,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,137 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,137 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 7a7f564afa8892e109c3421f089102f9: 2023-07-24 18:10:31,142 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:31,160 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222231145"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222231145"}]},"ts":"1690222231145"} 2023-07-24 18:10:31,189 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:31,191 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:31,196 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:31,197 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222231191"}]},"ts":"1690222231191"} 2023-07-24 18:10:31,199 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:10:31,201 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:31,203 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 18:10:31,204 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, 
state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:31,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:31,209 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:31,209 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:31,209 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:31,209 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:31,209 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,210 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 empty. 2023-07-24 18:10:31,211 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, ASSIGN}] 2023-07-24 18:10:31,211 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,211 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 18:10:31,213 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, ASSIGN 2023-07-24 18:10:31,219 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:31,254 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:31,256 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f78657a0e379a4435cf47a889f576b52, NAME => 'hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:31,284 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:31,285 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f78657a0e379a4435cf47a889f576b52, disabling compactions & flushes 2023-07-24 18:10:31,285 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,285 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,285 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. after waiting 0 ms 2023-07-24 18:10:31,285 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,285 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,285 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f78657a0e379a4435cf47a889f576b52: 2023-07-24 18:10:31,290 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:31,292 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222231291"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222231291"}]},"ts":"1690222231291"} 2023-07-24 18:10:31,296 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:10:31,297 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:31,298 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222231298"}]},"ts":"1690222231298"} 2023-07-24 18:10:31,303 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 18:10:31,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:31,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:31,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:31,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:31,308 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:31,308 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, ASSIGN}] 2023-07-24 18:10:31,312 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, ASSIGN 2023-07-24 18:10:31,319 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:31,320 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 18:10:31,322 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:31,323 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:31,323 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222231322"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222231322"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222231322"}]},"ts":"1690222231322"} 2023-07-24 18:10:31,323 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222231323"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222231323"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222231323"}]},"ts":"1690222231323"} 2023-07-24 18:10:31,330 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:31,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:31,495 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f78657a0e379a4435cf47a889f576b52, NAME => 'hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:31,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:31,495 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. service=MultiRowMutationService 2023-07-24 18:10:31,497 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 18:10:31,498 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,498 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:31,498 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,498 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,501 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,504 DEBUG [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m 2023-07-24 18:10:31,504 DEBUG [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m 2023-07-24 18:10:31,505 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f78657a0e379a4435cf47a889f576b52 columnFamilyName m 2023-07-24 18:10:31,506 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] regionserver.HStore(310): Store=f78657a0e379a4435cf47a889f576b52/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:31,507 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,514 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:31,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:31,518 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f78657a0e379a4435cf47a889f576b52; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@e1851eb, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:31,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f78657a0e379a4435cf47a889f576b52: 2023-07-24 18:10:31,520 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52., pid=8, masterSystemTime=1690222231485 2023-07-24 18:10:31,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:31,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a7f564afa8892e109c3421f089102f9, NAME => 'hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:31,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:31,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,527 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,530 DEBUG [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info 2023-07-24 18:10:31,530 DEBUG 
[StoreOpener-7a7f564afa8892e109c3421f089102f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info 2023-07-24 18:10:31,530 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a7f564afa8892e109c3421f089102f9 columnFamilyName info 2023-07-24 18:10:31,531 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] regionserver.HStore(310): Store=7a7f564afa8892e109c3421f089102f9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:31,534 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,536 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:31,536 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222231535"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222231535"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222231535"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222231535"}]},"ts":"1690222231535"} 2023-07-24 18:10:31,541 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:31,545 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:31,545 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-24 18:10:31,546 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,42261,1690222228228 in 210 msec 2023-07-24 18:10:31,547 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a7f564afa8892e109c3421f089102f9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11217863040, jitterRate=0.044744908809661865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:31,547 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a7f564afa8892e109c3421f089102f9: 2023-07-24 18:10:31,553 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9., pid=9, masterSystemTime=1690222231485 2023-07-24 18:10:31,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-24 18:10:31,558 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, ASSIGN in 238 msec 2023-07-24 18:10:31,558 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,558 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:31,561 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:31,561 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:31,561 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222231560"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222231560"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222231560"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222231560"}]},"ts":"1690222231560"} 2023-07-24 18:10:31,561 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222231561"}]},"ts":"1690222231561"} 2023-07-24 18:10:31,568 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 18:10:31,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-24 18:10:31,573 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,42261,1690222228228 in 234 msec 2023-07-24 18:10:31,575 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:31,578 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-24 18:10:31,578 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, ASSIGN in 362 msec 2023-07-24 18:10:31,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 380 msec 2023-07-24 18:10:31,579 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:31,579 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222231579"}]},"ts":"1690222231579"} 2023-07-24 18:10:31,582 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 18:10:31,585 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:31,587 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 546 msec 2023-07-24 18:10:31,627 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:10:31,627 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 18:10:31,652 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 18:10:31,654 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:31,654 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:31,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 18:10:31,689 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:31,696 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 32 msec 2023-07-24 18:10:31,699 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:31,699 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:31,699 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:10:31,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:31,710 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:10:31,713 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:31,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 18 msec 2023-07-24 18:10:31,725 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:10:31,733 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:10:31,733 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.005sec 2023-07-24 18:10:31,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 18:10:31,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-24 18:10:31,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:10:31,739 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46543,1690222225966-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:10:31,740 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46543,1690222225966-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 18:10:31,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:10:31,776 DEBUG [Listener at localhost/39007] zookeeper.ReadOnlyZKClient(139): Connect 0x5c97849b to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:31,783 DEBUG [Listener at localhost/39007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@39b957cc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:31,804 DEBUG [hconnection-0x6043b73e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:31,818 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34752, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:31,829 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:31,830 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:31,841 DEBUG [Listener at localhost/39007] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:10:31,849 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:10:31,868 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:10:31,868 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:31,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:10:31,876 DEBUG [Listener at localhost/39007] zookeeper.ReadOnlyZKClient(139): Connect 
0x1a021b1a to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:31,883 DEBUG [Listener at localhost/39007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b070797, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:31,884 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:31,890 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:31,891 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019886e954000a connected 2023-07-24 18:10:31,926 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=420, OpenFileDescriptor=671, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=177, AvailableMemoryMB=6564 2023-07-24 18:10:31,928 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-24 18:10:31,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:31,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:32,010 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:32,024 INFO [Listener at localhost/39007] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:32,029 INFO [Listener at localhost/39007] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34389 2023-07-24 18:10:32,029 INFO 
[Listener at localhost/39007] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:32,030 DEBUG [Listener at localhost/39007] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:32,032 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:32,035 INFO [Listener at localhost/39007] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:32,038 INFO [Listener at localhost/39007] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34389 connecting to ZooKeeper ensemble=127.0.0.1:51807 2023-07-24 18:10:32,043 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:343890x0, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:32,044 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(162): regionserver:343890x0, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:32,045 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34389-0x1019886e954000b connected 2023-07-24 18:10:32,046 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:10:32,047 DEBUG [Listener at localhost/39007] zookeeper.ZKUtil(164): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:32,051 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34389 2023-07-24 18:10:32,051 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34389 2023-07-24 18:10:32,052 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34389 2023-07-24 18:10:32,052 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34389 2023-07-24 18:10:32,052 DEBUG [Listener at localhost/39007] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34389 2023-07-24 18:10:32,055 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:32,055 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:32,056 INFO [Listener at localhost/39007] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:32,057 INFO [Listener at localhost/39007] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 
18:10:32,057 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:32,057 INFO [Listener at localhost/39007] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:32,057 INFO [Listener at localhost/39007] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:32,058 INFO [Listener at localhost/39007] http.HttpServer(1146): Jetty bound to port 43329 2023-07-24 18:10:32,058 INFO [Listener at localhost/39007] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:32,062 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:32,062 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41aa49ad{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:32,063 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:32,063 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@552c5220{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:32,201 INFO [Listener at localhost/39007] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:32,202 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:32,203 INFO [Listener at localhost/39007] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:32,203 INFO [Listener at localhost/39007] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:10:32,207 INFO [Listener at localhost/39007] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:32,209 INFO [Listener at localhost/39007] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@591c5ecf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/java.io.tmpdir/jetty-0_0_0_0-43329-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8433758953128272064/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:32,211 INFO [Listener at localhost/39007] server.AbstractConnector(333): Started ServerConnector@157f8446{HTTP/1.1, (http/1.1)}{0.0.0.0:43329} 2023-07-24 18:10:32,211 INFO [Listener at localhost/39007] server.Server(415): Started @12270ms 2023-07-24 18:10:32,215 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(951): ClusterId : e9e40271-0f12-407b-a4b1-71d428f14f45 2023-07-24 18:10:32,217 DEBUG [RS:3;jenkins-hbase4:34389] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:32,223 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:32,223 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:32,226 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:32,228 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ReadOnlyZKClient(139): Connect 0x13db7840 to 127.0.0.1:51807 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:32,250 DEBUG [RS:3;jenkins-hbase4:34389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1d1c944b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:32,250 DEBUG [RS:3;jenkins-hbase4:34389] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19231330, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:32,264 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:34389 2023-07-24 18:10:32,264 INFO [RS:3;jenkins-hbase4:34389] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:32,264 INFO [RS:3;jenkins-hbase4:34389] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:32,264 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:32,265 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,46543,1690222225966 with isa=jenkins-hbase4.apache.org/172.31.14.131:34389, startcode=1690222232023 2023-07-24 18:10:32,266 DEBUG [RS:3;jenkins-hbase4:34389] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:32,271 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50151, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:32,271 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46543] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,272 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:10:32,272 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f 2023-07-24 18:10:32,272 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44625 2023-07-24 18:10:32,272 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38249 2023-07-24 18:10:32,279 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:32,279 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:32,279 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:32,279 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:32,280 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,280 WARN [RS:3;jenkins-hbase4:34389] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:10:32,280 INFO [RS:3;jenkins-hbase4:34389] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:32,281 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,281 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34389,1690222232023] 2023-07-24 18:10:32,281 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:32,281 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:32,282 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:32,282 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:32,282 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:32,283 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:32,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:32,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:32,283 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:32,291 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,46543,1690222225966] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:10:32,291 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:32,291 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:32,291 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,295 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:32,296 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:32,296 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:32,297 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ZKUtil(162): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,298 DEBUG [RS:3;jenkins-hbase4:34389] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:32,298 INFO [RS:3;jenkins-hbase4:34389] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:32,359 INFO [RS:3;jenkins-hbase4:34389] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:32,375 INFO [RS:3;jenkins-hbase4:34389] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:32,375 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:32,383 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:32,386 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,386 DEBUG [RS:3;jenkins-hbase4:34389] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:32,396 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:32,396 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:32,396 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:32,412 INFO [RS:3;jenkins-hbase4:34389] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:32,412 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34389,1690222232023-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:32,433 INFO [RS:3;jenkins-hbase4:34389] regionserver.Replication(203): jenkins-hbase4.apache.org,34389,1690222232023 started 2023-07-24 18:10:32,433 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34389,1690222232023, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34389, sessionid=0x1019886e954000b 2023-07-24 18:10:32,433 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:32,433 DEBUG [RS:3;jenkins-hbase4:34389] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,433 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34389,1690222232023' 2023-07-24 18:10:32,433 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:32,438 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:32,439 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:32,439 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:32,439 DEBUG [RS:3;jenkins-hbase4:34389] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:32,439 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34389,1690222232023' 2023-07-24 18:10:32,439 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:32,440 DEBUG [RS:3;jenkins-hbase4:34389] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:32,441 DEBUG [RS:3;jenkins-hbase4:34389] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:32,441 INFO [RS:3;jenkins-hbase4:34389] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:10:32,441 INFO [RS:3;jenkins-hbase4:34389] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:10:32,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:32,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:32,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:32,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:32,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:32,477 DEBUG [hconnection-0x2f235fbd-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:32,487 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34756, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:32,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:32,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:32,509 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:32,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:32,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:42402 deadline: 1690223432508, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
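For orientation, the requests logged above (AddRSGroup followed by the failing MoveServers) correspond to client-side calls along these lines. This is a minimal sketch, assuming an open Connection conn to the mini cluster rather than the test's own VerifyingRSGroupAdminClient wrapper; the class and method names around it are illustrative only.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveSketch {
  // Sketch of the calls behind the AddRSGroup / MoveServers requests in the log.
  static void moveMasterIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("master"); // the "add rsgroup master" request
    // Attempting to move the master's address into the group; only live region
    // servers are valid group members, so this call is rejected.
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 46543)),
        "master");
  }
}

The move is rejected because jenkins-hbase4.apache.org:46543 is the master's RPC endpoint, not a registered region server, which is why the handler returns the ConstraintException recorded here and in the setup warning that follows.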
2023-07-24 18:10:32,510 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:32,513 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:32,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:32,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:32,515 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:32,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:32,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:32,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:32,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:32,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:32,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:32,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:32,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:32,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:32,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:32,543 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,544 INFO [RS:3;jenkins-hbase4:34389] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34389%2C1690222232023, suffix=, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,34389,1690222232023, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:32,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:32,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:32,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:32,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:32,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to default 2023-07-24 18:10:32,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:32,584 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:32,585 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:32,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:32,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:32,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, 
group=Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:32,597 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:32,607 INFO [RS:3;jenkins-hbase4:34389] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,34389,1690222232023/jenkins-hbase4.apache.org%2C34389%2C1690222232023.1690222232545 2023-07-24 18:10:32,607 DEBUG [RS:3;jenkins-hbase4:34389] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK]] 2023-07-24 18:10:32,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:32,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:32,616 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:32,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 12 2023-07-24 18:10:32,622 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:32,623 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:32,624 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:32,624 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:32,635 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:32,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:10:32,643 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:32,645 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 empty. 2023-07-24 18:10:32,645 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:32,646 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:32,646 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:32,646 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 empty. 2023-07-24 18:10:32,646 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:32,648 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:32,650 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 empty. 2023-07-24 18:10:32,651 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe empty. 2023-07-24 18:10:32,651 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df empty. 
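The create request logged at 18:10:32,611 resolves to a pre-split table with a single family 'f' and five regions. The following is a sketch of an equivalent Admin call, assuming an Admin handle admin; the split points are taken from the STARTKEY/ENDKEY boundaries that appear in the region-creation entries below, and the default column-family settings match the logged descriptor.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  // Sketch of the create-table request seen in the log: family 'f', REGION_REPLICATION=1,
  // and four split keys producing five regions.
  static void createPreSplitTable(Admin admin) throws Exception {
    TableDescriptorBuilder builder = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"));
    byte[][] splitKeys = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz")
    };
    admin.createTable(builder.build(), splitKeys);
  }
}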
2023-07-24 18:10:32,651 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:32,651 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:32,652 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:32,652 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:32,652 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 18:10:32,695 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:32,699 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 29df854c0b07f2f49169652a88be9d34, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:32,702 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => cdbbcb39299a3a101b22b1786703a9c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:32,707 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b92c1e9153318b2fe02f35f9efe9cc59, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', 
VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:32,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:10:32,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:32,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 29df854c0b07f2f49169652a88be9d34, disabling compactions & flushes 2023-07-24 18:10:32,798 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:32,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:32,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. after waiting 0 ms 2023-07-24 18:10:32,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:32,798 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 
2023-07-24 18:10:32,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 29df854c0b07f2f49169652a88be9d34: 2023-07-24 18:10:32,799 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 15a0bd689fe00dbd4c569ae65cae10df, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:32,800 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:32,800 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing cdbbcb39299a3a101b22b1786703a9c7, disabling compactions & flushes 2023-07-24 18:10:32,801 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:32,801 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:32,801 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. after waiting 0 ms 2023-07-24 18:10:32,801 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:32,801 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
2023-07-24 18:10:32,801 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for cdbbcb39299a3a101b22b1786703a9c7: 2023-07-24 18:10:32,801 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 1a8e87aba13653275f59e3df65b3f4fe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:32,824 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:32,825 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b92c1e9153318b2fe02f35f9efe9cc59, disabling compactions & flushes 2023-07-24 18:10:32,826 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:32,826 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:32,826 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. after waiting 0 ms 2023-07-24 18:10:32,826 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:32,826 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
2023-07-24 18:10:32,826 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b92c1e9153318b2fe02f35f9efe9cc59: 2023-07-24 18:10:32,850 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:32,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 1a8e87aba13653275f59e3df65b3f4fe, disabling compactions & flushes 2023-07-24 18:10:32,851 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:32,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:32,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. after waiting 0 ms 2023-07-24 18:10:32,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:32,851 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:32,851 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 1a8e87aba13653275f59e3df65b3f4fe: 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 15a0bd689fe00dbd4c569ae65cae10df, disabling compactions & flushes 2023-07-24 18:10:32,852 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
after waiting 0 ms 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:32,852 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:32,852 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 15a0bd689fe00dbd4c569ae65cae10df: 2023-07-24 18:10:32,857 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:32,859 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222232859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222232859"}]},"ts":"1690222232859"} 2023-07-24 18:10:32,859 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222232859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222232859"}]},"ts":"1690222232859"} 2023-07-24 18:10:32,860 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222232859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222232859"}]},"ts":"1690222232859"} 2023-07-24 18:10:32,860 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222232859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222232859"}]},"ts":"1690222232859"} 2023-07-24 18:10:32,860 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222232859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222232859"}]},"ts":"1690222232859"} 2023-07-24 18:10:32,917 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
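With the five regions now recorded in hbase:meta, their boundaries can be read back through the Admin API. A small sketch, assuming the same admin handle as above; the output format is illustrative.

import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ListRegionsSketch {
  // Prints the encoded name and key range of each region of the newly created table.
  static void printRegions(Admin admin) throws Exception {
    List<RegionInfo> regions =
        admin.getRegions(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
    for (RegionInfo region : regions) {
      System.out.println(region.getEncodedName() + " ["
          + Bytes.toStringBinary(region.getStartKey()) + ", "
          + Bytes.toStringBinary(region.getEndKey()) + ")");
    }
  }
}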
2023-07-24 18:10:32,919 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:32,920 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222232920"}]},"ts":"1690222232920"} 2023-07-24 18:10:32,922 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 18:10:32,933 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:32,934 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:32,934 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:32,934 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:32,934 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, ASSIGN}, {pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, ASSIGN}, {pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, ASSIGN}, {pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, ASSIGN}, {pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, ASSIGN}] 2023-07-24 18:10:32,937 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, ASSIGN 2023-07-24 18:10:32,938 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, ASSIGN 2023-07-24 18:10:32,938 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, ASSIGN 2023-07-24 18:10:32,938 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, ASSIGN 2023-07-24 18:10:32,939 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, ASSIGN 2023-07-24 18:10:32,940 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:32,940 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:32,940 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=15, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:32,940 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:32,941 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:32,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:10:33,090 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-24 18:10:33,093 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,093 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,093 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,093 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,093 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,094 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233093"}]},"ts":"1690222233093"} 2023-07-24 18:10:33,094 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233093"}]},"ts":"1690222233093"} 2023-07-24 18:10:33,094 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233093"}]},"ts":"1690222233093"} 2023-07-24 18:10:33,094 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233093"}]},"ts":"1690222233093"} 2023-07-24 18:10:33,093 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233093"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233093"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233093"}]},"ts":"1690222233093"} 2023-07-24 18:10:33,097 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 
1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:33,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=16, state=RUNNABLE; OpenRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,100 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=20, ppid=13, state=RUNNABLE; OpenRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,102 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=14, state=RUNNABLE; OpenRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:33,104 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=15, state=RUNNABLE; OpenRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,251 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,251 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:33,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:10:33,256 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42188, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:33,264 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:33,265 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b92c1e9153318b2fe02f35f9efe9cc59, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdbbcb39299a3a101b22b1786703a9c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:33,265 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:33,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,268 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,271 DEBUG [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/f 2023-07-24 18:10:33,271 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,271 DEBUG 
[StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/f 2023-07-24 18:10:33,272 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdbbcb39299a3a101b22b1786703a9c7 columnFamilyName f 2023-07-24 18:10:33,274 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] regionserver.HStore(310): Store=cdbbcb39299a3a101b22b1786703a9c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:33,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,276 DEBUG [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/f 2023-07-24 18:10:33,277 DEBUG [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/f 2023-07-24 18:10:33,277 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b92c1e9153318b2fe02f35f9efe9cc59 columnFamilyName f 2023-07-24 18:10:33,277 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,278 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] regionserver.HStore(310): Store=b92c1e9153318b2fe02f35f9efe9cc59/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:33,282 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,284 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:33,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:33,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:33,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdbbcb39299a3a101b22b1786703a9c7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11805176640, jitterRate=0.0994427502155304}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:33,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdbbcb39299a3a101b22b1786703a9c7: 2023-07-24 18:10:33,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7., pid=21, masterSystemTime=1690222233251 2023-07-24 18:10:33,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:33,301 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b92c1e9153318b2fe02f35f9efe9cc59; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10380439840, jitterRate=-0.033246204257011414}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:33,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b92c1e9153318b2fe02f35f9efe9cc59: 2023-07-24 18:10:33,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:33,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
2023-07-24 18:10:33,303 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:33,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a8e87aba13653275f59e3df65b3f4fe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 18:10:33,307 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:33,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,307 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233306"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222233306"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222233306"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222233306"}]},"ts":"1690222233306"} 2023-07-24 18:10:33,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59., pid=22, masterSystemTime=1690222233251 2023-07-24 18:10:33,307 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,315 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,317 DEBUG [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/f 2023-07-24 18:10:33,317 DEBUG [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/f 2023-07-24 18:10:33,318 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] compactions.CompactionConfiguration(173): size 
[minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a8e87aba13653275f59e3df65b3f4fe columnFamilyName f 2023-07-24 18:10:33,319 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:33,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:33,320 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] regionserver.HStore(310): Store=1a8e87aba13653275f59e3df65b3f4fe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:33,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:33,321 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=15 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 29df854c0b07f2f49169652a88be9d34, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 18:10:33,322 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233320"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222233320"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222233320"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222233320"}]},"ts":"1690222233320"} 2023-07-24 18:10:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,322 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking 
classloading for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,323 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,326 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,326 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=14 2023-07-24 18:10:33,331 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=14, state=SUCCESS; OpenRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,42261,1690222228228 in 217 msec 2023-07-24 18:10:33,333 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=15 2023-07-24 18:10:33,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, ASSIGN in 397 msec 2023-07-24 18:10:33,334 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=15, state=SUCCESS; OpenRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,46109,1690222228457 in 223 msec 2023-07-24 18:10:33,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:33,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, ASSIGN in 400 msec 2023-07-24 18:10:33,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:33,339 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1a8e87aba13653275f59e3df65b3f4fe; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10975746240, jitterRate=0.022196024656295776}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:33,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1a8e87aba13653275f59e3df65b3f4fe: 2023-07-24 18:10:33,342 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe., 
pid=18, masterSystemTime=1690222233251 2023-07-24 18:10:33,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:33,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:33,346 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,346 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233345"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222233345"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222233345"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222233345"}]},"ts":"1690222233345"} 2023-07-24 18:10:33,346 DEBUG [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/f 2023-07-24 18:10:33,346 DEBUG [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/f 2023-07-24 18:10:33,347 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 29df854c0b07f2f49169652a88be9d34 columnFamilyName f 2023-07-24 18:10:33,350 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] regionserver.HStore(310): Store=29df854c0b07f2f49169652a88be9d34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:33,352 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,354 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 
2023-07-24 18:10:33,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 18:10:33,355 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,42261,1690222228228 in 252 msec 2023-07-24 18:10:33,357 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, ASSIGN in 421 msec 2023-07-24 18:10:33,358 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:33,361 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:33,362 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 29df854c0b07f2f49169652a88be9d34; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10245378080, jitterRate=-0.0458248108625412}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:33,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 29df854c0b07f2f49169652a88be9d34: 2023-07-24 18:10:33,363 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34., pid=20, masterSystemTime=1690222233251 2023-07-24 18:10:33,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:33,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:33,371 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
2023-07-24 18:10:33,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15a0bd689fe00dbd4c569ae65cae10df, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 18:10:33,371 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,373 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,373 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233372"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222233372"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222233372"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222233372"}]},"ts":"1690222233372"} 2023-07-24 18:10:33,380 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=13 2023-07-24 18:10:33,380 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,380 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=13, state=SUCCESS; OpenRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,46109,1690222228457 in 276 msec 2023-07-24 18:10:33,385 DEBUG [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/f 2023-07-24 18:10:33,385 DEBUG [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/f 2023-07-24 18:10:33,386 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15a0bd689fe00dbd4c569ae65cae10df columnFamilyName f 2023-07-24 18:10:33,387 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] regionserver.HStore(310): Store=15a0bd689fe00dbd4c569ae65cae10df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:33,389 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,390 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, ASSIGN in 449 msec 2023-07-24 18:10:33,390 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:33,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:33,400 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 15a0bd689fe00dbd4c569ae65cae10df; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10724873600, jitterRate=-0.0011683106422424316}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:33,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 15a0bd689fe00dbd4c569ae65cae10df: 2023-07-24 18:10:33,402 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df., pid=19, masterSystemTime=1690222233251 2023-07-24 18:10:33,406 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,406 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
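[editorial sketch] By this point all five regions have been opened and their OPEN state and locations written to hbase:meta. A small sketch of how a client could read those locations back to check which server hosts each region; conn is assumed to be an open Connection to the same cluster, and the class and method names are illustrative:

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public class ListRegionLocationsSketch {
  // conn: an open Connection to the cluster whose log is shown above.
  static void printLocations(Connection conn) throws IOException {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Encoded region name and hosting server, e.g. 29df854c... -> jenkins-hbase4.apache.org,46109,...
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}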
2023-07-24 18:10:33,406 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:33,407 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233406"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222233406"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222233406"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222233406"}]},"ts":"1690222233406"} 2023-07-24 18:10:33,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=16 2023-07-24 18:10:33,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=16, state=SUCCESS; OpenRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,46109,1690222228457 in 312 msec 2023-07-24 18:10:33,428 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-24 18:10:33,429 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, ASSIGN in 490 msec 2023-07-24 18:10:33,433 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:33,434 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222233433"}]},"ts":"1690222233433"} 2023-07-24 18:10:33,438 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 18:10:33,441 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:33,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 831 msec 2023-07-24 18:10:33,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:10:33,757 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 12 completed 2023-07-24 18:10:33,757 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. Timeout = 60000ms 2023-07-24 18:10:33,758 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:33,765 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 
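[editorial sketch] The CreateTableProcedure (procId 12) has completed and the test now waits up to 60,000 ms for every region of the table to be assigned. In a mini-cluster test that wait is typically a single call on HBaseTestingUtility; a minimal sketch, assuming TEST_UTIL wraps the mini cluster started earlier in this log:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  // Assumed to be the same utility instance that started the mini cluster.
  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForTable() throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // Same 60,000 ms budget the log reports ("Waiting up to [60,000] milli-secs").
    TEST_UTIL.waitUntilAllRegionsAssigned(table, 60_000);
  }
}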
2023-07-24 18:10:33,766 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:33,766 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-24 18:10:33,767 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:33,772 DEBUG [Listener at localhost/39007] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:33,778 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41142, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:33,782 DEBUG [Listener at localhost/39007] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:33,786 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33698, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:33,787 DEBUG [Listener at localhost/39007] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:33,796 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34766, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:33,798 DEBUG [Listener at localhost/39007] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:33,802 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42192, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:33,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:33,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:33,823 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsAdmin1(307): Moving table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:33,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:33,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:33,847 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,848 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region 29df854c0b07f2f49169652a88be9d34 to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:33,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:33,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:33,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:33,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:33,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, REOPEN/MOVE 2023-07-24 18:10:33,852 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, REOPEN/MOVE 2023-07-24 18:10:33,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region cdbbcb39299a3a101b22b1786703a9c7 to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:33,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:33,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:33,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:33,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:33,853 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,854 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233853"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233853"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233853"}]},"ts":"1690222233853"} 2023-07-24 18:10:33,854 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, REOPEN/MOVE 2023-07-24 18:10:33,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region b92c1e9153318b2fe02f35f9efe9cc59 to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,855 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, REOPEN/MOVE 2023-07-24 18:10:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:33,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:33,857 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,857 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233857"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233857"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233857"}]},"ts":"1690222233857"} 2023-07-24 18:10:33,858 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; CloseRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, REOPEN/MOVE 2023-07-24 18:10:33,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=24, state=RUNNABLE; CloseRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:33,863 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region 15a0bd689fe00dbd4c569ae65cae10df to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,864 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, REOPEN/MOVE 2023-07-24 18:10:33,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:33,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:33,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:33,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:33,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:33,868 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,868 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233868"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233868"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233868"}]},"ts":"1690222233868"} 2023-07-24 18:10:33,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, REOPEN/MOVE 2023-07-24 18:10:33,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region 1a8e87aba13653275f59e3df65b3f4fe to RSGroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:33,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:33,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:33,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:33,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:33,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:33,872 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, REOPEN/MOVE 2023-07-24 18:10:33,874 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=30, ppid=25, state=RUNNABLE; CloseRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, REOPEN/MOVE 2023-07-24 18:10:33,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1791629600, current retry=0 2023-07-24 18:10:33,876 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:33,876 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, REOPEN/MOVE 2023-07-24 18:10:33,877 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222233876"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233876"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233876"}]},"ts":"1690222233876"} 2023-07-24 18:10:33,879 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:33,879 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222233879"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222233879"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222233879"}]},"ts":"1690222233879"} 2023-07-24 18:10:33,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=28, state=RUNNABLE; CloseRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:33,889 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:34,026 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,027 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 15a0bd689fe00dbd4c569ae65cae10df, disabling compactions & flushes 2023-07-24 18:10:34,028 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdbbcb39299a3a101b22b1786703a9c7, disabling compactions & flushes 2023-07-24 18:10:34,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:34,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:34,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:34,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:34,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. after waiting 0 ms 2023-07-24 18:10:34,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. after waiting 0 ms 2023-07-24 18:10:34,028 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:34,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:34,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:34,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
2023-07-24 18:10:34,040 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 15a0bd689fe00dbd4c569ae65cae10df: 2023-07-24 18:10:34,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 15a0bd689fe00dbd4c569ae65cae10df move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:34,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:34,046 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:34,045 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 29df854c0b07f2f49169652a88be9d34, disabling compactions & flushes 2023-07-24 18:10:34,046 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdbbcb39299a3a101b22b1786703a9c7: 2023-07-24 18:10:34,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,047 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cdbbcb39299a3a101b22b1786703a9c7 move to jenkins-hbase4.apache.org,40159,1690222227976 record at close sequenceid=2 2023-07-24 18:10:34,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. after waiting 0 ms 2023-07-24 18:10:34,047 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 
2023-07-24 18:10:34,047 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=CLOSED 2023-07-24 18:10:34,047 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234047"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222234047"}]},"ts":"1690222234047"} 2023-07-24 18:10:34,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,053 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=CLOSED 2023-07-24 18:10:34,053 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,053 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234053"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222234053"}]},"ts":"1690222234053"} 2023-07-24 18:10:34,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1a8e87aba13653275f59e3df65b3f4fe, disabling compactions & flushes 2023-07-24 18:10:34,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:34,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:34,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. after waiting 0 ms 2023-07-24 18:10:34,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 
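The RegionStateStore Put entries above are how each transition is persisted: the procedure writes the region's info:regioninfo, info:sn and info:state columns into hbase:meta while the region is CLOSING, rewrites info:state once the regionserver confirms CLOSED, and later (when the region reopens) fills in info:server, info:serverstartcode and info:seqnumDuringOpen. A minimal client-side sketch for reading those columns back for this table is shown below; the column names come from the log, while the class name and the default HBaseConfiguration are assumptions for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper, not part of the test: dump the assignment columns that
// RegionStateStore writes into hbase:meta for one table's regions.
public class MetaStateDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();            // assumed client configuration
    byte[] prefix = Bytes.toBytes("Group_testTableMoveTruncateAndDrop,");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Scan scan = new Scan().setRowPrefixFilter(prefix);         // all meta rows of the table
      try (ResultScanner scanner = meta.getScanner(scan)) {
        for (Result r : scanner) {
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
          System.out.println(Bytes.toString(r.getRow())
              + " state=" + (state == null ? "-" : Bytes.toString(state))
              + " server=" + (server == null ? "-" : Bytes.toString(server)));
        }
      }
    }
  }
}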
2023-07-24 18:10:34,059 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=28 2023-07-24 18:10:34,059 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=28, state=SUCCESS; CloseRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,46109,1690222228457 in 165 msec 2023-07-24 18:10:34,061 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:34,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=24 2023-07-24 18:10:34,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=24, state=SUCCESS; CloseRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,42261,1690222228228 in 194 msec 2023-07-24 18:10:34,064 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=24, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,40159,1690222227976; forceNewPlan=false, retain=false 2023-07-24 18:10:34,078 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:34,080 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 29df854c0b07f2f49169652a88be9d34: 2023-07-24 18:10:34,081 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 29df854c0b07f2f49169652a88be9d34 move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:34,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,084 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b92c1e9153318b2fe02f35f9efe9cc59, disabling compactions & flushes 2023-07-24 18:10:34,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:34,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
2023-07-24 18:10:34,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. after waiting 0 ms 2023-07-24 18:10:34,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:34,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:34,087 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=CLOSED 2023-07-24 18:10:34,087 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222234087"}]},"ts":"1690222234087"} 2023-07-24 18:10:34,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:34,088 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1a8e87aba13653275f59e3df65b3f4fe: 2023-07-24 18:10:34,088 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1a8e87aba13653275f59e3df65b3f4fe move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:34,094 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=CLOSED 2023-07-24 18:10:34,094 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234094"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222234094"}]},"ts":"1690222234094"} 2023-07-24 18:10:34,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:34,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
2023-07-24 18:10:34,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b92c1e9153318b2fe02f35f9efe9cc59: 2023-07-24 18:10:34,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b92c1e9153318b2fe02f35f9efe9cc59 move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:34,098 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-24 18:10:34,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; CloseRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,46109,1690222228457 in 236 msec 2023-07-24 18:10:34,103 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,104 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=CLOSED 2023-07-24 18:10:34,104 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234104"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222234104"}]},"ts":"1690222234104"} 2023-07-24 18:10:34,105 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=23, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:34,106 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-24 18:10:34,106 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,42261,1690222228228 in 209 msec 2023-07-24 18:10:34,108 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:34,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=30, resume processing ppid=25 2023-07-24 18:10:34,111 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, ppid=25, state=SUCCESS; CloseRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,46109,1690222228457 in 233 msec 2023-07-24 18:10:34,112 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=25, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, REOPEN/MOVE; state=CLOSED, 
location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:34,212 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 2023-07-24 18:10:34,212 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:34,213 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234212"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234212"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234212"}]},"ts":"1690222234212"} 2023-07-24 18:10:34,213 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,213 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,213 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234213"}]},"ts":"1690222234213"} 2023-07-24 18:10:34,213 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,214 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,214 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234213"}]},"ts":"1690222234213"} 2023-07-24 18:10:34,214 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234213"}]},"ts":"1690222234213"} 2023-07-24 18:10:34,213 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234213"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234213"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234213"}]},"ts":"1690222234213"} 2023-07-24 18:10:34,216 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=24, state=RUNNABLE; OpenRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:34,219 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=29, state=RUNNABLE; OpenRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,223 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=25, state=RUNNABLE; OpenRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,225 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=23, state=RUNNABLE; OpenRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,227 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,373 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:34,373 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:34,374 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,374 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:34,377 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41150, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:34,377 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33708, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:34,383 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
2023-07-24 18:10:34,383 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdbbcb39299a3a101b22b1786703a9c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 18:10:34,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:34,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,384 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,385 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:34,385 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b92c1e9153318b2fe02f35f9efe9cc59, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 18:10:34,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:34,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,386 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,386 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,391 DEBUG [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/f 2023-07-24 18:10:34,391 DEBUG [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/f 2023-07-24 18:10:34,391 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,392 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdbbcb39299a3a101b22b1786703a9c7 columnFamilyName f 2023-07-24 18:10:34,393 INFO [StoreOpener-cdbbcb39299a3a101b22b1786703a9c7-1] regionserver.HStore(310): Store=cdbbcb39299a3a101b22b1786703a9c7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:34,393 DEBUG [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/f 2023-07-24 18:10:34,395 DEBUG [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/f 2023-07-24 18:10:34,395 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,397 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b92c1e9153318b2fe02f35f9efe9cc59 columnFamilyName f 2023-07-24 18:10:34,398 INFO [StoreOpener-b92c1e9153318b2fe02f35f9efe9cc59-1] regionserver.HStore(310): Store=b92c1e9153318b2fe02f35f9efe9cc59/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:34,401 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,403 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,412 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:34,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b92c1e9153318b2fe02f35f9efe9cc59; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9851742240, jitterRate=-0.08248500525951385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:34,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b92c1e9153318b2fe02f35f9efe9cc59: 2023-07-24 18:10:34,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:34,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdbbcb39299a3a101b22b1786703a9c7; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10144758720, jitterRate=-0.05519571900367737}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:34,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdbbcb39299a3a101b22b1786703a9c7: 2023-07-24 18:10:34,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59., pid=35, masterSystemTime=1690222234374 2023-07-24 18:10:34,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7., pid=33, masterSystemTime=1690222234373 2023-07-24 18:10:34,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:34,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
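Each "Opened ..." entry also records the split policy the region was instantiated with: SteppingSplitPolicy layered over IncreasingToUpperBoundRegionSplitPolicy (initialSize=268435456, i.e. 256 MB, normally twice the memstore flush size unless hbase.increasing.policy.initial.size overrides it) and ConstantSizeRegionSplitPolicy, whose desiredMaxFileSize is hbase.hregion.max.filesize (10 GB by default) with a per-region jitter applied, which is why the regions report slightly different values near 10 GB. Where a table needs different behaviour, these settings are normally pinned on the table descriptor; a hypothetical sketch follows (table and family names are from the log, the 20 GB cap and the explicit policy class are illustrative assumptions):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Hypothetical tuning example, not taken from the test: pin the split policy and the
// region max file size on the table descriptor instead of relying on site defaults.
public class PinSplitPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();            // assumed client configuration
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
          .setMaxFileSize(20L * 1024 * 1024 * 1024)              // 20 GB before a split is considered
          .build();
      admin.modifyTable(desc);                                   // reopens regions with the new settings
    }
  }
}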
2023-07-24 18:10:34,434 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,434 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=25 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 29df854c0b07f2f49169652a88be9d34, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 18:10:34,434 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234434"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222234434"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222234434"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222234434"}]},"ts":"1690222234434"} 2023-07-24 18:10:34,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:34,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:34,440 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
2023-07-24 18:10:34,441 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=24 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:34,442 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234441"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222234441"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222234441"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222234441"}]},"ts":"1690222234441"} 2023-07-24 18:10:34,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=25 2023-07-24 18:10:34,446 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=25, state=SUCCESS; OpenRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,34389,1690222232023 in 219 msec 2023-07-24 18:10:34,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=24 2023-07-24 18:10:34,450 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=24, state=SUCCESS; OpenRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,40159,1690222227976 in 229 msec 2023-07-24 18:10:34,450 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=25, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, REOPEN/MOVE in 589 msec 2023-07-24 18:10:34,455 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,458 DEBUG [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/f 2023-07-24 18:10:34,458 DEBUG [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/f 2023-07-24 18:10:34,458 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 29df854c0b07f2f49169652a88be9d34 columnFamilyName f 2023-07-24 18:10:34,459 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, REOPEN/MOVE in 597 msec 2023-07-24 18:10:34,460 INFO [StoreOpener-29df854c0b07f2f49169652a88be9d34-1] regionserver.HStore(310): Store=29df854c0b07f2f49169652a88be9d34/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:34,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,463 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:34,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 29df854c0b07f2f49169652a88be9d34; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10728403360, jitterRate=-8.395761251449585E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:34,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 29df854c0b07f2f49169652a88be9d34: 2023-07-24 18:10:34,472 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34., pid=36, masterSystemTime=1690222234374 2023-07-24 18:10:34,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:34,476 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
2023-07-24 18:10:34,476 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 15a0bd689fe00dbd4c569ae65cae10df, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 18:10:34,476 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234476"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222234476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222234476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222234476"}]},"ts":"1690222234476"} 2023-07-24 18:10:34,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:34,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,477 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,483 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=23 2023-07-24 18:10:34,483 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=23, state=SUCCESS; OpenRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,34389,1690222232023 in 254 msec 2023-07-24 18:10:34,486 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, REOPEN/MOVE in 633 msec 2023-07-24 18:10:34,487 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,488 DEBUG [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/f 2023-07-24 18:10:34,488 DEBUG [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/f 2023-07-24 18:10:34,489 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 15a0bd689fe00dbd4c569ae65cae10df columnFamilyName f 2023-07-24 18:10:34,490 INFO [StoreOpener-15a0bd689fe00dbd4c569ae65cae10df-1] regionserver.HStore(310): Store=15a0bd689fe00dbd4c569ae65cae10df/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:34,491 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,497 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:34,499 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 15a0bd689fe00dbd4c569ae65cae10df; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11033414560, jitterRate=0.027566805481910706}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:34,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 15a0bd689fe00dbd4c569ae65cae10df: 2023-07-24 18:10:34,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df., pid=37, masterSystemTime=1690222234374 2023-07-24 18:10:34,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:34,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:34,505 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 
2023-07-24 18:10:34,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1a8e87aba13653275f59e3df65b3f4fe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 18:10:34,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:34,506 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,506 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234505"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222234505"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222234505"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222234505"}]},"ts":"1690222234505"} 2023-07-24 18:10:34,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,515 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,517 DEBUG [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/f 2023-07-24 18:10:34,517 DEBUG [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/f 2023-07-24 18:10:34,517 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1a8e87aba13653275f59e3df65b3f4fe columnFamilyName f 2023-07-24 18:10:34,518 INFO [StoreOpener-1a8e87aba13653275f59e3df65b3f4fe-1] regionserver.HStore(310): Store=1a8e87aba13653275f59e3df65b3f4fe/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:34,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-24 18:10:34,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,34389,1690222232023 in 282 msec 2023-07-24 18:10:34,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, REOPEN/MOVE in 654 msec 2023-07-24 18:10:34,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,528 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:34,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1a8e87aba13653275f59e3df65b3f4fe; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9979508000, jitterRate=-0.07058589160442352}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:34,531 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1a8e87aba13653275f59e3df65b3f4fe: 2023-07-24 18:10:34,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe., pid=34, masterSystemTime=1690222234374 2023-07-24 18:10:34,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:34,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 
2023-07-24 18:10:34,545 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,545 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234545"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222234545"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222234545"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222234545"}]},"ts":"1690222234545"} 2023-07-24 18:10:34,556 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=29 2023-07-24 18:10:34,557 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=29, state=SUCCESS; OpenRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,34389,1690222232023 in 332 msec 2023-07-24 18:10:34,559 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, REOPEN/MOVE in 684 msec 2023-07-24 18:10:34,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=23 2023-07-24 18:10:34,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1791629600. 
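
The RSGroupAdminServer message above ("All regions from table(s) ... moved to target group ...") marks the end of the table move this part of the test exercises. On the client side such a move goes through the rsgroup admin endpoint; a minimal sketch of the calls, assuming the RSGroupAdminClient helper from the hbase-rsgroup module (connection setup and variable names are illustrative, not taken from the test source), might look like:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
          // Target group name as it appears in the log above; the test creates it earlier via addRSGroup().
          String targetGroup = "Group_testTableMoveTruncateAndDrop_1791629600";
          // Corresponds to the RSGroupAdminService.MoveTables request logged below.
          rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
          // Corresponds to the GetRSGroupInfoOfTable request logged below.
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("Table is now in rsgroup: " + info.getName());
        }
      }
    }
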
2023-07-24 18:10:34,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:34,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:34,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:34,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:34,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:34,893 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:34,901 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:34,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:34,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=38, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:34,919 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222234919"}]},"ts":"1690222234919"} 2023-07-24 18:10:34,921 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 18:10:34,924 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 18:10:34,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 18:10:34,926 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, UNASSIGN}, {pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, UNASSIGN}, {pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, UNASSIGN}, {pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, UNASSIGN}, {pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, UNASSIGN}] 2023-07-24 18:10:34,932 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=40, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, UNASSIGN 2023-07-24 18:10:34,932 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, UNASSIGN 2023-07-24 18:10:34,932 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=39, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, UNASSIGN 2023-07-24 18:10:34,932 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, UNASSIGN 2023-07-24 18:10:34,932 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=41, ppid=38, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, UNASSIGN 2023-07-24 18:10:34,935 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:34,935 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234935"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234935"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234935"}]},"ts":"1690222234935"} 2023-07-24 18:10:34,936 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,936 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,936 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234936"}]},"ts":"1690222234936"} 2023-07-24 18:10:34,936 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234936"}]},"ts":"1690222234936"} 2023-07-24 18:10:34,936 INFO 
[PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,936 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:34,937 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222234936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234936"}]},"ts":"1690222234936"} 2023-07-24 18:10:34,937 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222234936"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222234936"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222234936"}]},"ts":"1690222234936"} 2023-07-24 18:10:34,938 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=44, ppid=40, state=RUNNABLE; CloseRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:34,942 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=41, state=RUNNABLE; CloseRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,944 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=46, ppid=39, state=RUNNABLE; CloseRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,948 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=42, state=RUNNABLE; CloseRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:34,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=43, state=RUNNABLE; CloseRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:35,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 18:10:35,093 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:35,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdbbcb39299a3a101b22b1786703a9c7, disabling compactions & flushes 2023-07-24 18:10:35,094 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:35,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 
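
The DisableTableProcedure (pid=38) and the per-region UNASSIGN / CloseRegionProcedure entries around here are all server-side work triggered by the single client call logged above as "Started disable of Group_testTableMoveTruncateAndDrop". A minimal sketch of that client side using the standard Admin API (connection setup and variable names are illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Blocks until the DisableTableProcedure completes; the repeated
          // "Checking to see if procedure is done pid=38" entries are this polling.
          admin.disableTable(table);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }
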
2023-07-24 18:10:35,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. after waiting 0 ms 2023-07-24 18:10:35,094 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:35,100 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:35,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 15a0bd689fe00dbd4c569ae65cae10df, disabling compactions & flushes 2023-07-24 18:10:35,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:35,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:35,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. after waiting 0 ms 2023-07-24 18:10:35,101 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 2023-07-24 18:10:35,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:35,105 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7. 2023-07-24 18:10:35,105 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdbbcb39299a3a101b22b1786703a9c7: 2023-07-24 18:10:35,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:35,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df. 
2023-07-24 18:10:35,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 15a0bd689fe00dbd4c569ae65cae10df: 2023-07-24 18:10:35,116 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:35,118 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=40 updating hbase:meta row=cdbbcb39299a3a101b22b1786703a9c7, regionState=CLOSED 2023-07-24 18:10:35,118 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222235118"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222235118"}]},"ts":"1690222235118"} 2023-07-24 18:10:35,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:35,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:35,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1a8e87aba13653275f59e3df65b3f4fe, disabling compactions & flushes 2023-07-24 18:10:35,120 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:35,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:35,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. after waiting 0 ms 2023-07-24 18:10:35,120 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 
2023-07-24 18:10:35,121 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=15a0bd689fe00dbd4c569ae65cae10df, regionState=CLOSED 2023-07-24 18:10:35,121 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222235121"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222235121"}]},"ts":"1690222235121"} 2023-07-24 18:10:35,126 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=44, resume processing ppid=40 2023-07-24 18:10:35,127 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=40, state=SUCCESS; CloseRegionProcedure cdbbcb39299a3a101b22b1786703a9c7, server=jenkins-hbase4.apache.org,40159,1690222227976 in 185 msec 2023-07-24 18:10:35,127 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=42 2023-07-24 18:10:35,128 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=42, state=SUCCESS; CloseRegionProcedure 15a0bd689fe00dbd4c569ae65cae10df, server=jenkins-hbase4.apache.org,34389,1690222232023 in 176 msec 2023-07-24 18:10:35,129 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=cdbbcb39299a3a101b22b1786703a9c7, UNASSIGN in 201 msec 2023-07-24 18:10:35,131 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=15a0bd689fe00dbd4c569ae65cae10df, UNASSIGN in 202 msec 2023-07-24 18:10:35,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:35,132 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe. 2023-07-24 18:10:35,132 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1a8e87aba13653275f59e3df65b3f4fe: 2023-07-24 18:10:35,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:35,134 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:35,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 29df854c0b07f2f49169652a88be9d34, disabling compactions & flushes 2023-07-24 18:10:35,136 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:35,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 
2023-07-24 18:10:35,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. after waiting 0 ms 2023-07-24 18:10:35,136 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:35,136 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=1a8e87aba13653275f59e3df65b3f4fe, regionState=CLOSED 2023-07-24 18:10:35,136 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222235136"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222235136"}]},"ts":"1690222235136"} 2023-07-24 18:10:35,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=43 2023-07-24 18:10:35,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=43, state=SUCCESS; CloseRegionProcedure 1a8e87aba13653275f59e3df65b3f4fe, server=jenkins-hbase4.apache.org,34389,1690222232023 in 189 msec 2023-07-24 18:10:35,150 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:35,152 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34. 2023-07-24 18:10:35,152 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 29df854c0b07f2f49169652a88be9d34: 2023-07-24 18:10:35,154 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=1a8e87aba13653275f59e3df65b3f4fe, UNASSIGN in 222 msec 2023-07-24 18:10:35,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:35,156 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:35,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b92c1e9153318b2fe02f35f9efe9cc59, disabling compactions & flushes 2023-07-24 18:10:35,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:35,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:35,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
after waiting 0 ms 2023-07-24 18:10:35,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 2023-07-24 18:10:35,158 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=39 updating hbase:meta row=29df854c0b07f2f49169652a88be9d34, regionState=CLOSED 2023-07-24 18:10:35,158 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222235158"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222235158"}]},"ts":"1690222235158"} 2023-07-24 18:10:35,164 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=46, resume processing ppid=39 2023-07-24 18:10:35,165 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=39, state=SUCCESS; CloseRegionProcedure 29df854c0b07f2f49169652a88be9d34, server=jenkins-hbase4.apache.org,34389,1690222232023 in 216 msec 2023-07-24 18:10:35,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=29df854c0b07f2f49169652a88be9d34, UNASSIGN in 239 msec 2023-07-24 18:10:35,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:35,173 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59. 
2023-07-24 18:10:35,173 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b92c1e9153318b2fe02f35f9efe9cc59: 2023-07-24 18:10:35,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:35,176 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=41 updating hbase:meta row=b92c1e9153318b2fe02f35f9efe9cc59, regionState=CLOSED 2023-07-24 18:10:35,176 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222235176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222235176"}]},"ts":"1690222235176"} 2023-07-24 18:10:35,192 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-24 18:10:35,193 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; CloseRegionProcedure b92c1e9153318b2fe02f35f9efe9cc59, server=jenkins-hbase4.apache.org,34389,1690222232023 in 247 msec 2023-07-24 18:10:35,197 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=38 2023-07-24 18:10:35,197 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=38, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b92c1e9153318b2fe02f35f9efe9cc59, UNASSIGN in 267 msec 2023-07-24 18:10:35,199 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222235199"}]},"ts":"1690222235199"} 2023-07-24 18:10:35,201 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 18:10:35,204 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 18:10:35,207 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 298 msec 2023-07-24 18:10:35,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=38 2023-07-24 18:10:35,231 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 38 completed 2023-07-24 18:10:35,233 INFO [Listener at localhost/39007] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:35,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:35,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=49, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-24 18:10:35,254 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-24 18:10:35,256 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:35,287 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:35,291 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:35,291 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:35,291 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:35,291 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:35,299 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits] 2023-07-24 18:10:35,301 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits] 2023-07-24 18:10:35,302 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits] 2023-07-24 18:10:35,302 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits] 2023-07-24 18:10:35,303 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits] 2023-07-24 18:10:35,319 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe/recovered.edits/7.seqid 2023-07-24 18:10:35,321 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/1a8e87aba13653275f59e3df65b3f4fe 2023-07-24 18:10:35,322 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7/recovered.edits/7.seqid 2023-07-24 18:10:35,323 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/cdbbcb39299a3a101b22b1786703a9c7 2023-07-24 18:10:35,323 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df/recovered.edits/7.seqid 2023-07-24 18:10:35,324 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59/recovered.edits/7.seqid 2023-07-24 18:10:35,325 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/15a0bd689fe00dbd4c569ae65cae10df 2023-07-24 18:10:35,325 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b92c1e9153318b2fe02f35f9efe9cc59 2023-07-24 18:10:35,326 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34/recovered.edits/7.seqid 2023-07-24 18:10:35,328 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/29df854c0b07f2f49169652a88be9d34 2023-07-24 18:10:35,328 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 18:10:35,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:35,364 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 18:10:35,371 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 18:10:35,372 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-24 18:10:35,373 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222235372"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,373 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222235372"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,373 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222235372"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,373 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222235372"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,373 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222235372"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,377 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 18:10:35,377 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 29df854c0b07f2f49169652a88be9d34, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222232604.29df854c0b07f2f49169652a88be9d34.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => cdbbcb39299a3a101b22b1786703a9c7, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222232604.cdbbcb39299a3a101b22b1786703a9c7.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 
b92c1e9153318b2fe02f35f9efe9cc59, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222232604.b92c1e9153318b2fe02f35f9efe9cc59.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 15a0bd689fe00dbd4c569ae65cae10df, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222232604.15a0bd689fe00dbd4c569ae65cae10df.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 1a8e87aba13653275f59e3df65b3f4fe, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222232604.1a8e87aba13653275f59e3df65b3f4fe.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 18:10:35,377 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-24 18:10:35,377 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222235377"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:35,380 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 18:10:35,391 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:35,391 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:35,391 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:35,391 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:35,391 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:35,399 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 empty. 2023-07-24 18:10:35,399 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 empty. 2023-07-24 18:10:35,399 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be empty. 2023-07-24 18:10:35,400 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 empty. 
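
Everything from the TruncateTableProcedure entry above through the region re-creation below is the server side of a truncate that preserves split points: the old region directories are archived, the old region rows are deleted from hbase:meta, and five fresh regions are created with the same start keys. A minimal sketch of the client call behind it (same illustrative setup as the earlier sketches):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateTableSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // The table must already be disabled (see the DisableTableProcedure above).
          // preserveSplits=true keeps the existing split keys, which is why the new
          // regions created below reuse the boundaries aaaaa, i\xBF\x14i\xBE,
          // r\x1C\xC7r\x1B and zzzzz.
          admin.truncateTable(table, true);
        }
      }
    }
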
2023-07-24 18:10:35,400 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 empty. 2023-07-24 18:10:35,400 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:35,401 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:35,401 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:35,401 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:35,401 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:35,401 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 18:10:35,506 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:35,513 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7ebbfcfa7bea7a83bef821a232a7a545, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:35,515 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => c009101eacf93867e6e2bc72abb744be, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:35,516 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 7840e8ef826e7cb816f46abc696d2026, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 7ebbfcfa7bea7a83bef821a232a7a545, disabling compactions & flushes 2023-07-24 18:10:35,592 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. after waiting 0 ms 2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:35,592 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 
2023-07-24 18:10:35,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 7ebbfcfa7bea7a83bef821a232a7a545: 2023-07-24 18:10:35,593 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 14c882fec391fe478ea8d7baaeb9da04, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:35,593 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:35,629 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 7840e8ef826e7cb816f46abc696d2026, disabling compactions & flushes 2023-07-24 18:10:35,630 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. after waiting 0 ms 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing c009101eacf93867e6e2bc72abb744be, disabling compactions & flushes 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:35,630 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 
2023-07-24 18:10:35,630 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 7840e8ef826e7cb816f46abc696d2026: 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. after waiting 0 ms 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:35,630 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:35,630 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for c009101eacf93867e6e2bc72abb744be: 2023-07-24 18:10:35,631 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => bfcc5e3bbd0769ae1a8bd1be7a784488, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing bfcc5e3bbd0769ae1a8bd1be7a784488, disabling compactions & flushes 2023-07-24 18:10:35,656 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 
2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. after waiting 0 ms 2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:35,656 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:35,656 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for bfcc5e3bbd0769ae1a8bd1be7a784488: 2023-07-24 18:10:35,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 14c882fec391fe478ea8d7baaeb9da04, disabling compactions & flushes 2023-07-24 18:10:36,046 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. after waiting 0 ms 2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,046 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 
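
The region pre-create/close work above is all part of TruncateTableProcedure pid=49; its completion message later in this log shows preserveSplits=true, which is why the four pre-existing split points (aaaaa, i\xBF\x14i\xBE, r\x1C\xC7r\x1B, zzzzz) are re-created, yielding the same five regions. On the client side the whole sequence corresponds to a single Admin call; a minimal sketch assuming the standard HBase 2.x Admin API, not a quote of the test code:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateSketch {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(tn);        // truncate requires a disabled table
          admin.truncateTable(tn, true); // preserveSplits=true; this call blocks while the
                                         // client polls the master, which is what produces the
                                         // repeated "Checking to see if procedure is done pid=49"
        }
      }
    }
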
2023-07-24 18:10:36,046 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 14c882fec391fe478ea8d7baaeb9da04: 2023-07-24 18:10:36,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222236052"}]},"ts":"1690222236052"} 2023-07-24 18:10:36,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222236052"}]},"ts":"1690222236052"} 2023-07-24 18:10:36,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222236052"}]},"ts":"1690222236052"} 2023-07-24 18:10:36,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222236052"}]},"ts":"1690222236052"} 2023-07-24 18:10:36,053 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236052"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222236052"}]},"ts":"1690222236052"} 2023-07-24 18:10:36,057 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-24 18:10:36,060 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222236060"}]},"ts":"1690222236060"} 2023-07-24 18:10:36,062 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-24 18:10:36,067 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:36,067 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:36,068 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:36,068 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:36,071 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, ASSIGN}, {pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, ASSIGN}, {pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, ASSIGN}, {pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, ASSIGN}, {pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, ASSIGN}] 2023-07-24 18:10:36,073 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, ASSIGN 2023-07-24 18:10:36,073 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, ASSIGN 2023-07-24 18:10:36,073 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, ASSIGN 2023-07-24 18:10:36,073 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, ASSIGN 2023-07-24 18:10:36,074 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, ASSIGN 2023-07-24 18:10:36,077 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:36,077 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40159,1690222227976; forceNewPlan=false, retain=false 2023-07-24 18:10:36,077 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=50, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:36,077 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=51, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:36,077 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=52, ppid=49, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40159,1690222227976; forceNewPlan=false, retain=false 2023-07-24 18:10:36,227 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
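
After the balancer pass ("Reassigned 5 regions. 5 retained the pre-restart assignment."), each region's target server is recorded in hbase:meta as regionState=OPENING in the entries that follow. As an illustration only, the resulting placement could be read back through a RegionLocator; the Connection argument is assumed to be an already-open client connection (as in the sketch earlier), not something this log itself provides:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class AssignmentDumpSketch {
      // Print encodedName -> serverName for every region of the table.
      static void dumpAssignments(Connection conn) throws IOException {
        try (RegionLocator locator = conn.getRegionLocator(
                TableName.valueOf("Group_testTableMoveTruncateAndDrop"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
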
2023-07-24 18:10:36,231 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=bfcc5e3bbd0769ae1a8bd1be7a784488, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:36,232 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=14c882fec391fe478ea8d7baaeb9da04, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,232 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236231"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222236231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222236231"}]},"ts":"1690222236231"} 2023-07-24 18:10:36,232 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222236232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222236232"}]},"ts":"1690222236232"} 2023-07-24 18:10:36,231 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=7840e8ef826e7cb816f46abc696d2026, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:36,232 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7ebbfcfa7bea7a83bef821a232a7a545, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,232 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236231"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222236231"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222236231"}]},"ts":"1690222236231"} 2023-07-24 18:10:36,232 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222236232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222236232"}]},"ts":"1690222236232"} 2023-07-24 18:10:36,232 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=c009101eacf93867e6e2bc72abb744be, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222236232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222236232"}]},"ts":"1690222236232"} 2023-07-24 18:10:36,234 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=55, ppid=54, state=RUNNABLE; OpenRegionProcedure 
bfcc5e3bbd0769ae1a8bd1be7a784488, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:36,236 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=53, state=RUNNABLE; OpenRegionProcedure 14c882fec391fe478ea8d7baaeb9da04, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:36,237 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=57, ppid=52, state=RUNNABLE; OpenRegionProcedure 7840e8ef826e7cb816f46abc696d2026, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:36,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=50, state=RUNNABLE; OpenRegionProcedure 7ebbfcfa7bea7a83bef821a232a7a545, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:36,240 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=51, state=RUNNABLE; OpenRegionProcedure c009101eacf93867e6e2bc72abb744be, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:36,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,398 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:36,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 14c882fec391fe478ea8d7baaeb9da04, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 18:10:36,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7840e8ef826e7cb816f46abc696d2026, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 18:10:36,398 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,399 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,402 INFO [StoreOpener-7840e8ef826e7cb816f46abc696d2026-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,402 INFO [StoreOpener-14c882fec391fe478ea8d7baaeb9da04-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,409 DEBUG [StoreOpener-14c882fec391fe478ea8d7baaeb9da04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/f 2023-07-24 18:10:36,409 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:36,409 DEBUG [StoreOpener-14c882fec391fe478ea8d7baaeb9da04-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/f 2023-07-24 18:10:36,411 DEBUG [StoreOpener-7840e8ef826e7cb816f46abc696d2026-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/f 2023-07-24 18:10:36,411 DEBUG [StoreOpener-7840e8ef826e7cb816f46abc696d2026-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/f 2023-07-24 18:10:36,411 INFO [StoreOpener-14c882fec391fe478ea8d7baaeb9da04-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 14c882fec391fe478ea8d7baaeb9da04 columnFamilyName f 2023-07-24 18:10:36,411 INFO [StoreOpener-7840e8ef826e7cb816f46abc696d2026-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7840e8ef826e7cb816f46abc696d2026 columnFamilyName f 2023-07-24 18:10:36,412 INFO [StoreOpener-14c882fec391fe478ea8d7baaeb9da04-1] regionserver.HStore(310): Store=14c882fec391fe478ea8d7baaeb9da04/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:36,412 INFO [StoreOpener-7840e8ef826e7cb816f46abc696d2026-1] regionserver.HStore(310): Store=7840e8ef826e7cb816f46abc696d2026/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:36,413 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:36,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:36,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:36,428 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7840e8ef826e7cb816f46abc696d2026; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10319487040, 
jitterRate=-0.03892287611961365}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:36,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7840e8ef826e7cb816f46abc696d2026: 2023-07-24 18:10:36,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:36,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026., pid=57, masterSystemTime=1690222236389 2023-07-24 18:10:36,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 14c882fec391fe478ea8d7baaeb9da04; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9400592320, jitterRate=-0.12450161576271057}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:36,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 14c882fec391fe478ea8d7baaeb9da04: 2023-07-24 18:10:36,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04., pid=56, masterSystemTime=1690222236390 2023-07-24 18:10:36,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:36,431 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:36,431 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 
2023-07-24 18:10:36,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bfcc5e3bbd0769ae1a8bd1be7a784488, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 18:10:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,433 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=52 updating hbase:meta row=7840e8ef826e7cb816f46abc696d2026, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:36,433 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236433"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222236433"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222236433"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222236433"}]},"ts":"1690222236433"} 2023-07-24 18:10:36,434 INFO [StoreOpener-bfcc5e3bbd0769ae1a8bd1be7a784488-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:36,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 
2023-07-24 18:10:36,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c009101eacf93867e6e2bc72abb744be, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 18:10:36,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,436 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=14c882fec391fe478ea8d7baaeb9da04, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,436 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236436"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222236436"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222236436"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222236436"}]},"ts":"1690222236436"} 2023-07-24 18:10:36,439 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=52 2023-07-24 18:10:36,439 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; OpenRegionProcedure 7840e8ef826e7cb816f46abc696d2026, server=jenkins-hbase4.apache.org,40159,1690222227976 in 198 msec 2023-07-24 18:10:36,440 DEBUG [StoreOpener-bfcc5e3bbd0769ae1a8bd1be7a784488-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/f 2023-07-24 18:10:36,440 DEBUG [StoreOpener-bfcc5e3bbd0769ae1a8bd1be7a784488-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/f 2023-07-24 18:10:36,441 INFO [StoreOpener-bfcc5e3bbd0769ae1a8bd1be7a784488-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bfcc5e3bbd0769ae1a8bd1be7a784488 columnFamilyName f 2023-07-24 18:10:36,442 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, ASSIGN in 368 msec 2023-07-24 18:10:36,442 INFO [StoreOpener-bfcc5e3bbd0769ae1a8bd1be7a784488-1] regionserver.HStore(310): Store=bfcc5e3bbd0769ae1a8bd1be7a784488/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:36,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=53 2023-07-24 18:10:36,442 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=53, state=SUCCESS; OpenRegionProcedure 14c882fec391fe478ea8d7baaeb9da04, server=jenkins-hbase4.apache.org,34389,1690222232023 in 203 msec 2023-07-24 18:10:36,443 INFO [StoreOpener-c009101eacf93867e6e2bc72abb744be-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, ASSIGN in 371 msec 2023-07-24 18:10:36,452 DEBUG [StoreOpener-c009101eacf93867e6e2bc72abb744be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/f 2023-07-24 18:10:36,452 DEBUG [StoreOpener-c009101eacf93867e6e2bc72abb744be-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/f 2023-07-24 18:10:36,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:36,453 INFO [StoreOpener-c009101eacf93867e6e2bc72abb744be-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min 
locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c009101eacf93867e6e2bc72abb744be columnFamilyName f 2023-07-24 18:10:36,454 INFO [StoreOpener-c009101eacf93867e6e2bc72abb744be-1] regionserver.HStore(310): Store=c009101eacf93867e6e2bc72abb744be/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:36,455 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:36,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:36,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:36,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bfcc5e3bbd0769ae1a8bd1be7a784488; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11813108960, jitterRate=0.10018150508403778}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:36,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c009101eacf93867e6e2bc72abb744be; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10901136640, jitterRate=0.015247464179992676}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:36,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bfcc5e3bbd0769ae1a8bd1be7a784488: 2023-07-24 18:10:36,466 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c009101eacf93867e6e2bc72abb744be: 2023-07-24 18:10:36,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be., pid=59, masterSystemTime=1690222236390 2023-07-24 18:10:36,467 
INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488., pid=55, masterSystemTime=1690222236389 2023-07-24 18:10:36,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:36,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:36,471 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:36,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7ebbfcfa7bea7a83bef821a232a7a545, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 18:10:36,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:36,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,473 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=51 updating hbase:meta row=c009101eacf93867e6e2bc72abb744be, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:36,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 
2023-07-24 18:10:36,473 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222236473"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222236473"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222236473"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222236473"}]},"ts":"1690222236473"} 2023-07-24 18:10:36,474 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=bfcc5e3bbd0769ae1a8bd1be7a784488, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:36,474 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236474"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222236474"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222236474"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222236474"}]},"ts":"1690222236474"} 2023-07-24 18:10:36,476 INFO [StoreOpener-7ebbfcfa7bea7a83bef821a232a7a545-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,480 DEBUG [StoreOpener-7ebbfcfa7bea7a83bef821a232a7a545-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/f 2023-07-24 18:10:36,481 DEBUG [StoreOpener-7ebbfcfa7bea7a83bef821a232a7a545-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/f 2023-07-24 18:10:36,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=51 2023-07-24 18:10:36,481 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=51, state=SUCCESS; OpenRegionProcedure c009101eacf93867e6e2bc72abb744be, server=jenkins-hbase4.apache.org,34389,1690222232023 in 236 msec 2023-07-24 18:10:36,481 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=55, resume processing ppid=54 2023-07-24 18:10:36,481 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=54, state=SUCCESS; OpenRegionProcedure bfcc5e3bbd0769ae1a8bd1be7a784488, server=jenkins-hbase4.apache.org,40159,1690222227976 in 244 msec 2023-07-24 18:10:36,481 INFO [StoreOpener-7ebbfcfa7bea7a83bef821a232a7a545-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7ebbfcfa7bea7a83bef821a232a7a545 columnFamilyName f 2023-07-24 18:10:36,483 INFO [StoreOpener-7ebbfcfa7bea7a83bef821a232a7a545-1] regionserver.HStore(310): Store=7ebbfcfa7bea7a83bef821a232a7a545/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:36,483 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, ASSIGN in 410 msec 2023-07-24 18:10:36,483 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, ASSIGN in 413 msec 2023-07-24 18:10:36,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,485 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:36,492 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:36,492 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7ebbfcfa7bea7a83bef821a232a7a545; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10727733440, jitterRate=-9.019672870635986E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:36,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7ebbfcfa7bea7a83bef821a232a7a545: 2023-07-24 18:10:36,494 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545., pid=58, masterSystemTime=1690222236390 2023-07-24 18:10:36,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:36,496 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 
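
At this point all five regions are open again; just below, the table is flipped back to ENABLED and TruncateTableProcedure pid=49 finishes, after which the client's TableFuture (the source of the repeated "Checking to see if procedure is done pid=49" calls) returns. In a mini-cluster test the usual way to block until this state is reached is via the HBaseTestingUtility helpers; whether this particular test uses exactly these calls is an assumption, so treat this as a sketch:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForTableSketch {
      // Block until every region of the truncated table is assigned and servable again.
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        util.waitUntilAllRegionsAssigned(tn);
        util.waitTableAvailable(tn);
      }
    }
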
2023-07-24 18:10:36,497 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=50 updating hbase:meta row=7ebbfcfa7bea7a83bef821a232a7a545, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:36,497 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222236497"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222236497"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222236497"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222236497"}]},"ts":"1690222236497"} 2023-07-24 18:10:36,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=50 2023-07-24 18:10:36,501 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=50, state=SUCCESS; OpenRegionProcedure 7ebbfcfa7bea7a83bef821a232a7a545, server=jenkins-hbase4.apache.org,34389,1690222232023 in 260 msec 2023-07-24 18:10:36,507 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=49 2023-07-24 18:10:36,507 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=49, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, ASSIGN in 433 msec 2023-07-24 18:10:36,507 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222236507"}]},"ts":"1690222236507"} 2023-07-24 18:10:36,510 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-24 18:10:36,513 DEBUG [PEWorker-4] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-24 18:10:36,515 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=49, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.2710 sec 2023-07-24 18:10:36,517 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:10:36,518 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 18:10:36,518 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:36,518 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 18:10:36,518 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:10:36,518 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering 
Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 18:10:36,521 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:10:36,522 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:10:36,522 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:10:36,523 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-24 18:10:37,412 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=49 2023-07-24 18:10:37,412 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 49 completed 2023-07-24 18:10:37,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:37,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,416 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:37,417 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,417 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=60, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,422 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 18:10:37,423 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222237423"}]},"ts":"1690222237423"} 2023-07-24 18:10:37,425 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-24 18:10:37,427 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-24 18:10:37,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, UNASSIGN}, {pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, UNASSIGN}, {pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, UNASSIGN}, {pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, UNASSIGN}, {pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, UNASSIGN}] 2023-07-24 18:10:37,430 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, UNASSIGN 2023-07-24 18:10:37,431 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, UNASSIGN 2023-07-24 18:10:37,431 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=62, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, UNASSIGN 2023-07-24 18:10:37,431 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=63, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, UNASSIGN 2023-07-24 18:10:37,431 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=61, ppid=60, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, UNASSIGN 2023-07-24 18:10:37,431 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=bfcc5e3bbd0769ae1a8bd1be7a784488, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:37,432 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222237431"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222237431"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222237431"}]},"ts":"1690222237431"} 2023-07-24 18:10:37,432 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=14c882fec391fe478ea8d7baaeb9da04, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:37,432 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=c009101eacf93867e6e2bc72abb744be, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:37,433 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222237432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222237432"}]},"ts":"1690222237432"} 2023-07-24 18:10:37,432 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=7ebbfcfa7bea7a83bef821a232a7a545, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:37,433 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222237432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222237432"}]},"ts":"1690222237432"} 2023-07-24 18:10:37,433 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222237432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222237432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222237432"}]},"ts":"1690222237432"} 2023-07-24 18:10:37,432 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=7840e8ef826e7cb816f46abc696d2026, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:37,433 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237432"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222237432"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222237432"}]},"ts":"1690222237432"} 2023-07-24 18:10:37,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=66, ppid=65, state=RUNNABLE; CloseRegionProcedure bfcc5e3bbd0769ae1a8bd1be7a784488, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:37,437 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=64, state=RUNNABLE; CloseRegionProcedure 14c882fec391fe478ea8d7baaeb9da04, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:37,438 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=68, ppid=62, state=RUNNABLE; CloseRegionProcedure c009101eacf93867e6e2bc72abb744be, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:37,440 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=61, state=RUNNABLE; CloseRegionProcedure 7ebbfcfa7bea7a83bef821a232a7a545, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:37,441 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=63, state=RUNNABLE; CloseRegionProcedure 7840e8ef826e7cb816f46abc696d2026, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:37,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to 
see if procedure is done pid=60 2023-07-24 18:10:37,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:37,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:37,592 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 14c882fec391fe478ea8d7baaeb9da04, disabling compactions & flushes 2023-07-24 18:10:37,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bfcc5e3bbd0769ae1a8bd1be7a784488, disabling compactions & flushes 2023-07-24 18:10:37,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:37,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:37,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:37,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:37,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. after waiting 0 ms 2023-07-24 18:10:37,593 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. after waiting 0 ms 2023-07-24 18:10:37,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:37,594 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 2023-07-24 18:10:37,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:37,602 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:37,604 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04. 
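[Annotation] The repeated "Checking to see if procedure is done pid=60" entries are the client half of the blocking disable: HBaseAdmin submits a DisableTableProcedure and then polls the master until the procedure reports done, while the procedure fans out one UNASSIGN (TransitRegionStateProcedure) plus one CloseRegionProcedure per region, as the surrounding entries show. A rough client-side equivalent is sketched below; the class name, timeout, and connection bootstrap are illustrative assumptions, not code taken from TestRSGroupsAdmin1.

    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisablePollingSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Submits the DisableTableProcedure; the master stores it (pid=60 in the log above)
          // and the returned future polls the master until the procedure completes.
          Future<Void> disable = admin.disableTableAsync(table);
          disable.get(60, TimeUnit.SECONDS);
          System.out.println("disabled: " + admin.isTableDisabled(table));
        }
      }
    }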
2023-07-24 18:10:37,605 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488. 2023-07-24 18:10:37,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 14c882fec391fe478ea8d7baaeb9da04: 2023-07-24 18:10:37,605 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bfcc5e3bbd0769ae1a8bd1be7a784488: 2023-07-24 18:10:37,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:37,610 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:37,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7ebbfcfa7bea7a83bef821a232a7a545, disabling compactions & flushes 2023-07-24 18:10:37,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:37,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:37,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. after waiting 0 ms 2023-07-24 18:10:37,612 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 2023-07-24 18:10:37,615 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=14c882fec391fe478ea8d7baaeb9da04, regionState=CLOSED 2023-07-24 18:10:37,615 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222237615"}]},"ts":"1690222237615"} 2023-07-24 18:10:37,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:37,615 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:37,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7840e8ef826e7cb816f46abc696d2026, disabling compactions & flushes 2023-07-24 18:10:37,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:37,616 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 
2023-07-24 18:10:37,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. after waiting 0 ms 2023-07-24 18:10:37,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 2023-07-24 18:10:37,617 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=bfcc5e3bbd0769ae1a8bd1be7a784488, regionState=CLOSED 2023-07-24 18:10:37,617 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222237617"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222237617"}]},"ts":"1690222237617"} 2023-07-24 18:10:37,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=67, resume processing ppid=64 2023-07-24 18:10:37,621 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=64, state=SUCCESS; CloseRegionProcedure 14c882fec391fe478ea8d7baaeb9da04, server=jenkins-hbase4.apache.org,34389,1690222232023 in 181 msec 2023-07-24 18:10:37,630 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=66, resume processing ppid=65 2023-07-24 18:10:37,630 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=14c882fec391fe478ea8d7baaeb9da04, UNASSIGN in 193 msec 2023-07-24 18:10:37,630 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=65, state=SUCCESS; CloseRegionProcedure bfcc5e3bbd0769ae1a8bd1be7a784488, server=jenkins-hbase4.apache.org,40159,1690222227976 in 186 msec 2023-07-24 18:10:37,631 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:37,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545. 
2023-07-24 18:10:37,634 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7ebbfcfa7bea7a83bef821a232a7a545: 2023-07-24 18:10:37,634 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=bfcc5e3bbd0769ae1a8bd1be7a784488, UNASSIGN in 202 msec 2023-07-24 18:10:37,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:37,636 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:37,637 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c009101eacf93867e6e2bc72abb744be, disabling compactions & flushes 2023-07-24 18:10:37,637 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:37,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:37,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. after waiting 0 ms 2023-07-24 18:10:37,638 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 2023-07-24 18:10:37,638 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=61 updating hbase:meta row=7ebbfcfa7bea7a83bef821a232a7a545, regionState=CLOSED 2023-07-24 18:10:37,638 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690222237638"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222237638"}]},"ts":"1690222237638"} 2023-07-24 18:10:37,642 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:37,645 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026. 
2023-07-24 18:10:37,645 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7840e8ef826e7cb816f46abc696d2026: 2023-07-24 18:10:37,647 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:37,648 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=63 updating hbase:meta row=7840e8ef826e7cb816f46abc696d2026, regionState=CLOSED 2023-07-24 18:10:37,648 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237648"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222237648"}]},"ts":"1690222237648"} 2023-07-24 18:10:37,649 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=61 2023-07-24 18:10:37,649 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=61, state=SUCCESS; CloseRegionProcedure 7ebbfcfa7bea7a83bef821a232a7a545, server=jenkins-hbase4.apache.org,34389,1690222232023 in 201 msec 2023-07-24 18:10:37,651 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7ebbfcfa7bea7a83bef821a232a7a545, UNASSIGN in 221 msec 2023-07-24 18:10:37,651 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:37,654 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be. 
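[Annotation] Each "Wrote file=.../recovered.edits/<n>.seqid, newMaxSeqId=<n>" entry above records a marker file holding the highest sequence id the region had used at close time, so that a later open starts above it. The sketch below merely lists such a marker with the Hadoop FileSystem API; the region directory is copied from the WALSplitUtil entry above and is only meaningful while this particular mini-cluster run and its HDFS are alive.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SeqidMarkerSketch {
      public static void main(String[] args) throws Exception {
        // Region directory taken verbatim from the log entry above.
        Path regionDir = new Path("hdfs://localhost:44625/user/jenkins/test-data/"
            + "3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/"
            + "Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be");
        FileSystem fs = regionDir.getFileSystem(new Configuration());
        for (FileStatus status : fs.listStatus(new Path(regionDir, "recovered.edits"))) {
          // Expect a single marker such as 4.seqid, matching newMaxSeqId=4 in the log.
          System.out.println(status.getPath());
        }
      }
    }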
2023-07-24 18:10:37,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c009101eacf93867e6e2bc72abb744be: 2023-07-24 18:10:37,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:37,658 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=62 updating hbase:meta row=c009101eacf93867e6e2bc72abb744be, regionState=CLOSED 2023-07-24 18:10:37,658 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690222237658"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222237658"}]},"ts":"1690222237658"} 2023-07-24 18:10:37,659 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=63 2023-07-24 18:10:37,660 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=63, state=SUCCESS; CloseRegionProcedure 7840e8ef826e7cb816f46abc696d2026, server=jenkins-hbase4.apache.org,40159,1690222227976 in 214 msec 2023-07-24 18:10:37,662 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7840e8ef826e7cb816f46abc696d2026, UNASSIGN in 231 msec 2023-07-24 18:10:37,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=68, resume processing ppid=62 2023-07-24 18:10:37,675 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=62, state=SUCCESS; CloseRegionProcedure c009101eacf93867e6e2bc72abb744be, server=jenkins-hbase4.apache.org,34389,1690222232023 in 222 msec 2023-07-24 18:10:37,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=60 2023-07-24 18:10:37,680 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=60, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c009101eacf93867e6e2bc72abb744be, UNASSIGN in 247 msec 2023-07-24 18:10:37,681 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222237681"}]},"ts":"1690222237681"} 2023-07-24 18:10:37,683 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-24 18:10:37,685 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-24 18:10:37,691 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 271 msec 2023-07-24 18:10:37,727 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=60 2023-07-24 18:10:37,728 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 60 completed 2023-07-24 18:10:37,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,743 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,747 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=71, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_1791629600' 2023-07-24 18:10:37,749 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=71, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:37,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:37,754 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:37,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 18:10:37,767 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:37,767 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:37,767 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:37,767 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:37,767 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:37,773 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/recovered.edits] 2023-07-24 18:10:37,774 
DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/recovered.edits] 2023-07-24 18:10:37,774 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/recovered.edits] 2023-07-24 18:10:37,774 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/recovered.edits] 2023-07-24 18:10:37,775 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/recovered.edits] 2023-07-24 18:10:37,788 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545/recovered.edits/4.seqid 2023-07-24 18:10:37,788 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026/recovered.edits/4.seqid 2023-07-24 18:10:37,789 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04/recovered.edits/4.seqid 2023-07-24 18:10:37,789 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be/recovered.edits/4.seqid 2023-07-24 18:10:37,789 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7ebbfcfa7bea7a83bef821a232a7a545 2023-07-24 18:10:37,789 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7840e8ef826e7cb816f46abc696d2026 2023-07-24 18:10:37,790 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c009101eacf93867e6e2bc72abb744be 2023-07-24 18:10:37,790 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/14c882fec391fe478ea8d7baaeb9da04 2023-07-24 18:10:37,791 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488/recovered.edits/4.seqid 2023-07-24 18:10:37,791 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testTableMoveTruncateAndDrop/bfcc5e3bbd0769ae1a8bd1be7a784488 2023-07-24 18:10:37,792 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-24 18:10:37,796 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=71, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,810 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-24 18:10:37,813 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-24 18:10:37,815 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=71, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,815 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-24 18:10:37,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222237815"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222237815"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222237815"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,815 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222237815"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,816 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222237815"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,818 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 18:10:37,819 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 7ebbfcfa7bea7a83bef821a232a7a545, NAME => 'Group_testTableMoveTruncateAndDrop,,1690222235330.7ebbfcfa7bea7a83bef821a232a7a545.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => c009101eacf93867e6e2bc72abb744be, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690222235330.c009101eacf93867e6e2bc72abb744be.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 7840e8ef826e7cb816f46abc696d2026, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690222235330.7840e8ef826e7cb816f46abc696d2026.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 14c882fec391fe478ea8d7baaeb9da04, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690222235330.14c882fec391fe478ea8d7baaeb9da04.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => bfcc5e3bbd0769ae1a8bd1be7a784488, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690222235330.bfcc5e3bbd0769ae1a8bd1be7a784488.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 18:10:37,819 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
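[Annotation] Each Delete above removes one region row from hbase:meta; the row keys follow the pattern "<table>,<start key>,<region id>.<encoded name>." shown in the JSON, and the table-state row (keyed by the bare table name) is removed in the next entry. For orientation only, a plain client scan over hbase:meta for those region rows could look like the sketch below; this is not part of the test, and the class name and connection bootstrap are assumptions.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRegionRowSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows for the table all share the "<table>," prefix seen in the Delete entries.
          Scan scan = new Scan()
              .setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
          try (ResultScanner scanner = meta.getScanner(scan)) {
            for (Result row : scanner) {
              System.out.println(Bytes.toString(row.getRow()));
            }
            // After DeleteTableProcedure (pid=71) finishes, this scan returns no rows.
          }
        }
      }
    }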
2023-07-24 18:10:37,819 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222237819"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:37,821 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-24 18:10:37,823 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=71, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-24 18:10:37,825 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=71, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 87 msec 2023-07-24 18:10:37,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=71 2023-07-24 18:10:37,866 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 71 completed 2023-07-24 18:10:37,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:37,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:37,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:37,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:37,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
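[Annotation] The three admin operations recorded as completed above (TRUNCATE procId 49, DISABLE procId 60, DELETE procId 71) correspond roughly to the following Admin calls. This is a minimal sketch of the public client API, not the test's actual code; the class name and connection bootstrap are assumptions.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class TruncateDisableDropSketch {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.disableTable(table);         // truncate requires a disabled table
          admin.truncateTable(table, true);  // TRUNCATE, procId 49: preserveSplits=true, table ends up ENABLED
          admin.disableTable(table);         // DISABLE, procId 60: unassigns all five regions
          admin.deleteTable(table);          // DELETE, procId 71: archives region dirs, cleans hbase:meta
        }
      }
    }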
2023-07-24 18:10:37,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:37,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:37,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:37,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:37,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:37,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1791629600, current retry=0 2023-07-24 18:10:37,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1791629600 => default 2023-07-24 18:10:37,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:37,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1791629600 2023-07-24 18:10:37,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:37,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:37,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:37,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:37,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:37,905 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
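[Annotation] The MoveServers and RemoveRSGroup requests above return the test group's two region servers to the default group and then drop the emptied group. A rough equivalent using the rsgroup client from this module (the same RSGroupAdminClient that appears in the stack trace further down) is sketched below; the class name and connection bootstrap are illustrative assumptions, not the test's code.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupTeardownSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groups = new RSGroupAdminClient(conn);
          // The two servers that were moved into the test group earlier in the run.
          Set<Address> members = new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 34389),
              Address.fromParts("jenkins-hbase4.apache.org", 40159)));
          groups.moveServers(members, "default");                                 // MoveServers above
          groups.removeRSGroup("Group_testTableMoveTruncateAndDrop_1791629600");  // RemoveRSGroup above
        }
      }
    }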
2023-07-24 18:10:37,905 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:37,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:37,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:37,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:37,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:37,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:37,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:37,918 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:37,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:37,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:37,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:37,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:37,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:37,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:37,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:37,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:37,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:37,933 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 148 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223437932, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:37,933 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:37,935 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:37,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:37,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:37,937 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:37,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:37,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:37,967 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=490 (was 420) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_720430503_17 at /127.0.0.1:40398 [Receiving block BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:44625 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_720430503_17 at /127.0.0.1:40862 [Receiving block BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f-prefix:jenkins-hbase4.apache.org,34389,1690222232023 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51807@0x13db7840-EventThread 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:34389-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:34389 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34389Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_720430503_17 at /127.0.0.1:51800 [Receiving block BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RSProcedureDispatcher-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_600996078_17 at /127.0.0.1:40530 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51807@0x13db7840 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:44625 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1614930635-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-7443e145-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-635-acceptor-0@4e0ba007-ServerConnector@157f8446{HTTP/1.1, (http/1.1)}{0.0.0.0:43329} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_720430503_17 at 
/127.0.0.1:40984 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1614930635-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:51807@0x13db7840-SendThread(127.0.0.1:51807) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34389 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=757 (was 671) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=540 (was 534) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=5999 (was 6564) 2023-07-24 18:10:37,989 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=490, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=540, ProcessCount=177, AvailableMemoryMB=5998 2023-07-24 18:10:37,989 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-24 18:10:37,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:37,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:37,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:37,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:37,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:37,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:37,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:38,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:38,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:38,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:38,010 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:38,011 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:38,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,014 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:38,021 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:38,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:38,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 176 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223438029, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:38,030 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:38,032 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:38,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,033 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:38,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:38,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:38,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-24 18:10:38,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 182 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42402 deadline: 1690223438037, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 18:10:38,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-24 18:10:38,039 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 184 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:42402 deadline: 1690223438038, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 18:10:38,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-24 18:10:38,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 186 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:42402 deadline: 1690223438040, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-24 18:10:38,041 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-24 18:10:38,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-24 18:10:38,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:38,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:38,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:38,061 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:38,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:38,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:38,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:38,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-24 18:10:38,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:38,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:38,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:38,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:38,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:38,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:38,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:38,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:38,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,078 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:38,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:38,083 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:38,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:38,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,100 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:38,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:38,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:38,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,118 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 220 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223438118, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:38,119 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:38,120 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:38,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,122 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:38,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:38,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:38,139 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=493 (was 490) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=757 (was 757), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=540 (was 540), ProcessCount=177 (was 177), AvailableMemoryMB=5972 (was 5998) 2023-07-24 18:10:38,183 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=493, OpenFileDescriptor=757, MaxFileDescriptor=60000, SystemLoadAverage=540, ProcessCount=177, AvailableMemoryMB=5953 2023-07-24 18:10:38,183 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-24 18:10:38,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:38,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:38,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:38,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:38,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:38,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:38,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:38,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:38,213 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:38,214 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:38,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:38,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:38,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:38,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:38,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 248 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223438232, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:38,233 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:38,235 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:38,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,236 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:38,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:38,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:38,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:38,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:38,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-24 18:10:38,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:38,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:38,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:38,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:38,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:38,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup bar 2023-07-24 18:10:38,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:38,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:38,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:38,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:38,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(238): Moving server region f78657a0e379a4435cf47a889f576b52, which do not belong to RSGroup bar 2023-07-24 18:10:38,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, REOPEN/MOVE 2023-07-24 18:10:38,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(238): Moving server region 7a7f564afa8892e109c3421f089102f9, which do not belong to RSGroup bar 2023-07-24 18:10:38,284 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, REOPEN/MOVE 2023-07-24 18:10:38,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, REOPEN/MOVE 2023-07-24 18:10:38,288 INFO 
[PEWorker-4] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:38,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-24 18:10:38,289 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, REOPEN/MOVE 2023-07-24 18:10:38,289 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222238287"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222238287"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222238287"}]},"ts":"1690222238287"} 2023-07-24 18:10:38,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:10:38,290 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:38,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 3 region(s) to group default, current retry=0 2023-07-24 18:10:38,291 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-24 18:10:38,290 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222238290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222238290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222238290"}]},"ts":"1690222238290"} 2023-07-24 18:10:38,292 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,42261,1690222228228, state=CLOSING 2023-07-24 18:10:38,292 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:38,293 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:38,293 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:38,293 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:38,294 INFO [PEWorker-5] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=74, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:38,295 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=76, ppid=73, state=RUNNABLE; CloseRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:38,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:38,446 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-24 18:10:38,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f78657a0e379a4435cf47a889f576b52, disabling compactions & flushes 2023-07-24 18:10:38,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:38,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:38,447 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:38,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:38,447 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:38,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. after waiting 0 ms 2023-07-24 18:10:38,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:38,448 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 
2023-07-24 18:10:38,448 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:38,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=40.81 KB heapSize=63.08 KB 2023-07-24 18:10:38,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f78657a0e379a4435cf47a889f576b52 1/1 column families, dataSize=6.37 KB heapSize=10.52 KB 2023-07-24 18:10:38,593 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.37 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/.tmp/m/3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:38,595 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=37.75 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/info/fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:38,634 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:38,634 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:38,654 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/.tmp/m/3d73903be8534798974d44e4ce3d086e as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m/3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:38,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:38,664 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m/3d73903be8534798974d44e4ce3d086e, entries=9, sequenceid=26, filesize=5.5 K 2023-07-24 18:10:38,669 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.37 KB/6527, heapSize ~10.50 KB/10752, currentSize=0 B/0 for f78657a0e379a4435cf47a889f576b52 in 220ms, sequenceid=26, compaction requested=false 2023-07-24 18:10:38,676 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/rep_barrier/5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:38,681 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 18:10:38,682 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:38,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:38,683 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f78657a0e379a4435cf47a889f576b52: 2023-07-24 18:10:38,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f78657a0e379a4435cf47a889f576b52 move to jenkins-hbase4.apache.org,46109,1690222228457 record at close sequenceid=26 2023-07-24 18:10:38,685 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:38,686 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=75, ppid=72, state=RUNNABLE; CloseRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:38,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:38,700 WARN [DataStreamer for file /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/table/5552857b1486493aa9f63cd7f641389e] hdfs.DataStreamer(982): Caught exception java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1257) at java.lang.Thread.join(Thread.java:1331) at org.apache.hadoop.hdfs.DataStreamer.closeResponder(DataStreamer.java:980) at org.apache.hadoop.hdfs.DataStreamer.endBlock(DataStreamer.java:630) at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:807) 2023-07-24 18:10:38,703 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.91 KB at sequenceid=92 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/table/5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:38,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:38,710 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/info/fdd28ddb4cca467eb2aef9de73cceaf4 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info/fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:38,718 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:38,718 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info/fdd28ddb4cca467eb2aef9de73cceaf4, entries=42, sequenceid=92, filesize=9.7 K 2023-07-24 18:10:38,720 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/rep_barrier/5294f6aa28af4e028f00869216ed29db as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier/5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:38,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:38,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier/5294f6aa28af4e028f00869216ed29db, entries=10, sequenceid=92, filesize=6.1 K 2023-07-24 18:10:38,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/table/5552857b1486493aa9f63cd7f641389e as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table/5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:38,735 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:38,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table/5552857b1486493aa9f63cd7f641389e, entries=15, sequenceid=92, filesize=6.2 K 2023-07-24 18:10:38,737 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~40.81 KB/41791, heapSize ~63.03 KB/64544, currentSize=0 B/0 for 1588230740 in 288ms, sequenceid=92, compaction requested=false 2023-07-24 18:10:38,748 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/recovered.edits/95.seqid, newMaxSeqId=95, maxSeqId=1 2023-07-24 18:10:38,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:38,749 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:38,749 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:38,749 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46109,1690222228457 record at close sequenceid=92 2023-07-24 18:10:38,751 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-24 18:10:38,752 WARN [PEWorker-5] zookeeper.MetaTableLocator(225): Tried to set null ServerName in 
hbase:meta; skipping -- ServerName required 2023-07-24 18:10:38,754 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=74 2023-07-24 18:10:38,755 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=74, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,42261,1690222228228 in 458 msec 2023-07-24 18:10:38,755 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=74, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:38,906 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46109,1690222228457, state=OPENING 2023-07-24 18:10:38,908 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:38,908 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=78, ppid=74, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:38,908 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:39,065 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:10:39,065 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:39,067 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46109%2C1690222228457.meta, suffix=.meta, logDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,46109,1690222228457, archiveDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs, maxLogs=32 2023-07-24 18:10:39,087 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK] 2023-07-24 18:10:39,087 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK] 2023-07-24 18:10:39,089 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK] 2023-07-24 18:10:39,092 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/WALs/jenkins-hbase4.apache.org,46109,1690222228457/jenkins-hbase4.apache.org%2C46109%2C1690222228457.meta.1690222239069.meta 2023-07-24 18:10:39,094 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41213,DS-983aedaf-bcc4-46c7-9dc5-65773cb2618c,DISK], DatanodeInfoWithStorage[127.0.0.1:34623,DS-cbeb2446-245e-4c39-86f7-ee43beeea239,DISK], DatanodeInfoWithStorage[127.0.0.1:36767,DS-2361d440-9ecc-4ffc-8670-240b554a18c1,DISK]] 2023-07-24 18:10:39,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:39,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:39,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:10:39,095 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-24 18:10:39,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:10:39,095 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:39,096 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:10:39,096 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:10:39,098 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:39,099 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info 2023-07-24 18:10:39,099 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info 2023-07-24 18:10:39,099 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:39,111 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:39,111 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info/fdd28ddb4cca467eb2aef9de73cceaf4 2023-07-24 18:10:39,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:39,112 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:39,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:39,113 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:39,114 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:39,124 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:39,124 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier/5294f6aa28af4e028f00869216ed29db 2023-07-24 18:10:39,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:39,124 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:39,125 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table 2023-07-24 18:10:39,125 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table 2023-07-24 18:10:39,126 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:39,133 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:39,133 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table/5552857b1486493aa9f63cd7f641389e 2023-07-24 18:10:39,133 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:39,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:39,135 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740 2023-07-24 18:10:39,138 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 18:10:39,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:39,141 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=96; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10613484000, jitterRate=-0.011542275547981262}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:39,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:39,142 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=78, masterSystemTime=1690222239060 2023-07-24 18:10:39,143 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:10:39,143 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:10:39,144 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46109,1690222228457, state=OPEN 2023-07-24 18:10:39,145 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:39,145 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:39,146 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=CLOSED 2023-07-24 18:10:39,146 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222239146"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222239146"}]},"ts":"1690222239146"} 2023-07-24 18:10:39,147 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42261] ipc.CallRunner(144): callId: 177 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:34722 deadline: 1690222299147, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=92. 
2023-07-24 18:10:39,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=78, resume processing ppid=74 2023-07-24 18:10:39,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=78, ppid=74, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46109,1690222228457 in 237 msec 2023-07-24 18:10:39,149 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 859 msec 2023-07-24 18:10:39,248 DEBUG [PEWorker-5] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:39,250 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42198, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:39,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=72 2023-07-24 18:10:39,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=72, state=SUCCESS; CloseRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,42261,1690222228228 in 960 msec 2023-07-24 18:10:39,255 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=72, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:39,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=72 2023-07-24 18:10:39,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a7f564afa8892e109c3421f089102f9, disabling compactions & flushes 2023-07-24 18:10:39,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:39,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:39,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. after waiting 0 ms 2023-07-24 18:10:39,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
2023-07-24 18:10:39,298 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 7a7f564afa8892e109c3421f089102f9 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-24 18:10:39,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/.tmp/info/c70b749993e04d95b6a1ebe202d26092 2023-07-24 18:10:39,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/.tmp/info/c70b749993e04d95b6a1ebe202d26092 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info/c70b749993e04d95b6a1ebe202d26092 2023-07-24 18:10:39,330 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info/c70b749993e04d95b6a1ebe202d26092, entries=2, sequenceid=6, filesize=4.8 K 2023-07-24 18:10:39,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 7a7f564afa8892e109c3421f089102f9 in 33ms, sequenceid=6, compaction requested=false 2023-07-24 18:10:39,337 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-24 18:10:39,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
2023-07-24 18:10:39,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a7f564afa8892e109c3421f089102f9: 2023-07-24 18:10:39,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7a7f564afa8892e109c3421f089102f9 move to jenkins-hbase4.apache.org,46109,1690222228457 record at close sequenceid=6 2023-07-24 18:10:39,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,340 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=CLOSED 2023-07-24 18:10:39,341 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222239340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222239340"}]},"ts":"1690222239340"} 2023-07-24 18:10:39,344 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=73 2023-07-24 18:10:39,345 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=73, state=SUCCESS; CloseRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,42261,1690222228228 in 1.0490 sec 2023-07-24 18:10:39,345 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=73, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:39,345 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:39,346 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222239345"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222239345"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222239345"}]},"ts":"1690222239345"} 2023-07-24 18:10:39,346 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:39,346 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222239346"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222239346"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222239346"}]},"ts":"1690222239346"} 2023-07-24 18:10:39,349 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=72, state=RUNNABLE; OpenRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:39,350 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=73, state=RUNNABLE; OpenRegionProcedure 
7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:39,505 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:39,505 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f78657a0e379a4435cf47a889f576b52, NAME => 'hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. service=MultiRowMutationService 2023-07-24 18:10:39,506 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,508 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,509 DEBUG [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m 2023-07-24 18:10:39,509 DEBUG [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m 2023-07-24 18:10:39,510 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, 
incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f78657a0e379a4435cf47a889f576b52 columnFamilyName m 2023-07-24 18:10:39,517 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:39,517 DEBUG [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] regionserver.HStore(539): loaded hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m/3d73903be8534798974d44e4ce3d086e 2023-07-24 18:10:39,517 INFO [StoreOpener-f78657a0e379a4435cf47a889f576b52-1] regionserver.HStore(310): Store=f78657a0e379a4435cf47a889f576b52/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:39,518 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,519 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:39,523 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f78657a0e379a4435cf47a889f576b52; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2a39f269, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:39,523 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f78657a0e379a4435cf47a889f576b52: 2023-07-24 18:10:39,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52., pid=79, masterSystemTime=1690222239501 2023-07-24 18:10:39,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:39,525 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:39,525 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
2023-07-24 18:10:39,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7a7f564afa8892e109c3421f089102f9, NAME => 'hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:39,526 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=72 updating hbase:meta row=f78657a0e379a4435cf47a889f576b52, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:39,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:39,526 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222239526"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222239526"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222239526"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222239526"}]},"ts":"1690222239526"} 2023-07-24 18:10:39,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,528 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,529 DEBUG [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info 2023-07-24 18:10:39,529 DEBUG [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info 2023-07-24 18:10:39,529 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7a7f564afa8892e109c3421f089102f9 columnFamilyName info 2023-07-24 18:10:39,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=72 2023-07-24 18:10:39,530 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=72, state=SUCCESS; OpenRegionProcedure f78657a0e379a4435cf47a889f576b52, server=jenkins-hbase4.apache.org,46109,1690222228457 in 180 msec 2023-07-24 18:10:39,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=72, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f78657a0e379a4435cf47a889f576b52, REOPEN/MOVE in 1.2480 sec 2023-07-24 18:10:39,537 DEBUG [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] regionserver.HStore(539): loaded hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/info/c70b749993e04d95b6a1ebe202d26092 2023-07-24 18:10:39,538 INFO [StoreOpener-7a7f564afa8892e109c3421f089102f9-1] regionserver.HStore(310): Store=7a7f564afa8892e109c3421f089102f9/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:39,538 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,543 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:39,544 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7a7f564afa8892e109c3421f089102f9; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11585441120, jitterRate=0.0789782851934433}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:39,544 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7a7f564afa8892e109c3421f089102f9: 2023-07-24 18:10:39,545 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9., pid=80, masterSystemTime=1690222239501 2023-07-24 18:10:39,547 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:39,547 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
2023-07-24 18:10:39,548 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=73 updating hbase:meta row=7a7f564afa8892e109c3421f089102f9, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:39,548 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222239547"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222239547"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222239547"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222239547"}]},"ts":"1690222239547"} 2023-07-24 18:10:39,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=73 2023-07-24 18:10:39,553 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=73, state=SUCCESS; OpenRegionProcedure 7a7f564afa8892e109c3421f089102f9, server=jenkins-hbase4.apache.org,46109,1690222228457 in 199 msec 2023-07-24 18:10:39,555 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=73, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=7a7f564afa8892e109c3421f089102f9, REOPEN/MOVE in 1.2700 sec 2023-07-24 18:10:40,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976, jenkins-hbase4.apache.org,42261,1690222228228] are moved back to default 2023-07-24 18:10:40,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-24 18:10:40,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:40,293 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42261] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:34756 deadline: 1690222300293, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=26. 2023-07-24 18:10:40,396 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42261] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34756 deadline: 1690222300396, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=92. 
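The "Move servers done: default => bar" entry above is the tail end of an RSGroupAdminService.MoveServers call that shifted three region servers out of the default group. A minimal sketch of driving that operation from a client, assuming the RSGroupAdminClient API from the hbase-rsgroup module and an already-configured connection; the host/port values are the ones named in the log but are otherwise illustrative.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersToBarSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("bar"); // the target group must exist before servers can join it
          Set<Address> servers = new HashSet<>();
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34389));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40159));
          servers.add(Address.fromParts("jenkins-hbase4.apache.org", 42261));
          // Regions hosted on these servers are reassigned as part of the move, which is what
          // produces the REOPEN/MOVE procedures for hbase:namespace and hbase:rsgroup above.
          rsGroupAdmin.moveServers(servers, "bar");
        }
      }
    }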
2023-07-24 18:10:40,499 DEBUG [hconnection-0x2f235fbd-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:40,506 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48314, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:40,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:40,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:40,534 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 18:10:40,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:40,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:40,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:40,541 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:40,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-24 18:10:40,542 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42261] ipc.CallRunner(144): callId: 186 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:34722 deadline: 1690222300542, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=26. 
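The HMaster$4 entry above logs the create request for 'Group_testFailRemoveGroup' with a single column family 'f' and REGION_REPLICATION => '1'. A sketch of the equivalent client-side call with the standard Admin API; only the table name, family name, and the two attributes shown in the log are taken from it, the connection boilerplate is assumed.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateGroupTestTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setRegionReplication(1) // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
                  .setMaxVersions(1)   // VERSIONS => '1'
                  .build())
              .build();
          // Drives the CreateTableProcedure logged as pid=81: pre-operation, write the FS layout,
          // add the region to hbase:meta, assign it, and finally mark the table ENABLED.
          admin.createTable(desc);
        }
      }
    }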
2023-07-24 18:10:40,542 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 18:10:40,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 18:10:40,648 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:40,649 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:40,649 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:40,650 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:40,652 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:40,654 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:40,655 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 empty. 2023-07-24 18:10:40,656 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:40,656 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 18:10:40,688 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:40,690 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 823faf5d0debaca3ba57e04c27bdaa67, NAME => 'Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 823faf5d0debaca3ba57e04c27bdaa67, disabling compactions & flushes 2023-07-24 18:10:40,722 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. after waiting 0 ms 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:40,722 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:40,722 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:40,725 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:40,726 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222240726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222240726"}]},"ts":"1690222240726"} 2023-07-24 18:10:40,728 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:10:40,729 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:40,729 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222240729"}]},"ts":"1690222240729"} 2023-07-24 18:10:40,730 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-24 18:10:40,741 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, ASSIGN}] 2023-07-24 18:10:40,743 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, ASSIGN 2023-07-24 18:10:40,744 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:40,846 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 18:10:40,895 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:40,895 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222240895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222240895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222240895"}]},"ts":"1690222240895"} 2023-07-24 18:10:40,897 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:41,053 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:41,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 823faf5d0debaca3ba57e04c27bdaa67, NAME => 'Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:41,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:41,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,053 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,055 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,057 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:41,057 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:41,057 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 823faf5d0debaca3ba57e04c27bdaa67 columnFamilyName f 2023-07-24 18:10:41,058 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(310): Store=823faf5d0debaca3ba57e04c27bdaa67/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:41,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,060 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:41,066 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 823faf5d0debaca3ba57e04c27bdaa67; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11815265440, jitterRate=0.10038234293460846}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:41,066 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:41,067 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67., pid=83, masterSystemTime=1690222241049 2023-07-24 18:10:41,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,068 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
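The entries that follow show the test blocking until every region of the new table is assigned before it returns to the rsgroup admin calls. A sketch of that wait using the HBaseTestingUtility method named in the log; the TEST_UTIL instance stands in for the mini-cluster utility the test already holds, and the 60000 ms timeout is the one printed below.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignmentSketch {
      // In the real test this utility is the one that started the mini-cluster.
      static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      static void waitForGroupTestTable() throws Exception {
        // Blocks until hbase:meta and the assignment manager both report the table's regions
        // as open, producing the "Waiting until all regions ... get assigned. Timeout = 60000ms"
        // and "All regions for table ... assigned" lines below.
        TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testFailRemoveGroup"), 60000);
      }
    }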
2023-07-24 18:10:41,069 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:41,069 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222241069"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222241069"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222241069"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222241069"}]},"ts":"1690222241069"} 2023-07-24 18:10:41,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-24 18:10:41,072 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457 in 173 msec 2023-07-24 18:10:41,074 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-24 18:10:41,074 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, ASSIGN in 331 msec 2023-07-24 18:10:41,075 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:41,075 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222241075"}]},"ts":"1690222241075"} 2023-07-24 18:10:41,076 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-24 18:10:41,079 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:41,080 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 542 msec 2023-07-24 18:10:41,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-24 18:10:41,148 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-24 18:10:41,148 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-24 18:10:41,149 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:41,150 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42261] ipc.CallRunner(144): callId: 277 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:34752 deadline: 1690222301149, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=92. 2023-07-24 18:10:41,252 DEBUG [hconnection-0x6043b73e-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:41,255 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:41,264 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-24 18:10:41,264 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:41,264 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-24 18:10:41,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-24 18:10:41,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:41,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:41,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:41,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:41,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-24 18:10:41,273 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region 823faf5d0debaca3ba57e04c27bdaa67 to RSGroup bar 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:10:41,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:41,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE 2023-07-24 18:10:41,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-24 18:10:41,276 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE 2023-07-24 18:10:41,277 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:41,277 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222241277"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222241277"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222241277"}]},"ts":"1690222241277"} 2023-07-24 18:10:41,279 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:41,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 823faf5d0debaca3ba57e04c27bdaa67, disabling compactions & flushes 2023-07-24 18:10:41,436 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. after waiting 0 ms 2023-07-24 18:10:41,436 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:41,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:41,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:41,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 823faf5d0debaca3ba57e04c27bdaa67 move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:41,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,445 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSED 2023-07-24 18:10:41,445 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222241445"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222241445"}]},"ts":"1690222241445"} 2023-07-24 18:10:41,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-24 18:10:41,453 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457 in 168 msec 2023-07-24 18:10:41,454 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:41,605 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:41,605 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:41,605 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222241605"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222241605"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222241605"}]},"ts":"1690222241605"} 2023-07-24 18:10:41,608 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:41,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:41,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 823faf5d0debaca3ba57e04c27bdaa67, NAME => 'Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:41,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:41,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,766 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,767 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,769 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:41,769 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:41,769 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 823faf5d0debaca3ba57e04c27bdaa67 columnFamilyName f 2023-07-24 18:10:41,770 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(310): Store=823faf5d0debaca3ba57e04c27bdaa67/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:41,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,772 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:41,777 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 823faf5d0debaca3ba57e04c27bdaa67; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10043223840, jitterRate=-0.06465189158916473}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:41,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:41,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67., pid=86, masterSystemTime=1690222241761 2023-07-24 18:10:41,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:41,781 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:41,782 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222241781"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222241781"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222241781"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222241781"}]},"ts":"1690222241781"} 2023-07-24 18:10:41,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-24 18:10:41,787 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,34389,1690222232023 in 177 msec 2023-07-24 18:10:41,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE in 513 msec 2023-07-24 18:10:41,901 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:42,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-24 18:10:42,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
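By this point the MoveTables request issued at 18:10:41,267 has completed: the rsgroup znodes were rewritten and the REOPEN/MOVE procedure (pid=84) relocated region 823faf5d0debaca3ba57e04c27bdaa67 from jenkins-hbase4.apache.org,46109 (group default) onto jenkins-hbase4.apache.org,34389 (group bar). A sketch of the client call that initiates such a move, again assuming the RSGroupAdminClient API; the follow-up print-out of the group contents is illustrative.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTablesToBarSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          // Records the table under group 'bar' and moves each of its regions onto a server
          // that belongs to that group (the REOPEN/MOVE seen in the log).
          rsGroupAdmin.moveTables(Collections.singleton(tn), "bar");
          // After the move the group reports both its servers and the table.
          RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
          System.out.println("bar servers: " + bar.getServers());
          System.out.println("bar tables:  " + bar.getTables());
        }
      }
    }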
2023-07-24 18:10:42,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:42,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:42,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:42,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-24 18:10:42,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:42,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 18:10:42,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:42,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42402 deadline: 1690223442284, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-24 18:10:42,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:42,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:42,286 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 289 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:42402 deadline: 1690223442286, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-24 18:10:42,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-24 18:10:42,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:42,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:42,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:42,296 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:42,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-24 18:10:42,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region 823faf5d0debaca3ba57e04c27bdaa67 to RSGroup default 2023-07-24 18:10:42,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE 2023-07-24 18:10:42,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 18:10:42,300 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE 2023-07-24 18:10:42,301 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:42,301 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222242301"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222242301"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222242301"}]},"ts":"1690222242301"} 2023-07-24 18:10:42,306 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:42,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 823faf5d0debaca3ba57e04c27bdaa67, disabling compactions & flushes 2023-07-24 18:10:42,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:42,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:42,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. after waiting 0 ms 2023-07-24 18:10:42,463 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:42,467 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:42,468 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:42,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:42,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 823faf5d0debaca3ba57e04c27bdaa67 move to jenkins-hbase4.apache.org,46109,1690222228457 record at close sequenceid=5 2023-07-24 18:10:42,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,471 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSED 2023-07-24 18:10:42,471 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222242471"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222242471"}]},"ts":"1690222242471"} 2023-07-24 18:10:42,475 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-24 18:10:42,475 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,34389,1690222232023 in 170 msec 2023-07-24 18:10:42,476 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:42,627 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:42,627 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222242627"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222242627"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222242627"}]},"ts":"1690222242627"} 2023-07-24 18:10:42,629 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:42,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:42,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 823faf5d0debaca3ba57e04c27bdaa67, NAME => 'Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:42,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:42,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,786 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,787 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,788 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:42,788 DEBUG [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f 2023-07-24 18:10:42,789 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 823faf5d0debaca3ba57e04c27bdaa67 columnFamilyName f 2023-07-24 18:10:42,789 INFO [StoreOpener-823faf5d0debaca3ba57e04c27bdaa67-1] regionserver.HStore(310): Store=823faf5d0debaca3ba57e04c27bdaa67/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:42,790 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,792 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:42,796 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 823faf5d0debaca3ba57e04c27bdaa67; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10422822240, jitterRate=-0.029299035668373108}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:42,796 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:42,797 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67., pid=89, masterSystemTime=1690222242781 2023-07-24 18:10:42,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:42,799 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:42,799 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:42,800 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222242799"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222242799"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222242799"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222242799"}]},"ts":"1690222242799"} 2023-07-24 18:10:42,803 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-24 18:10:42,803 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457 in 172 msec 2023-07-24 18:10:42,805 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, REOPEN/MOVE in 505 msec 2023-07-24 18:10:43,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-24 18:10:43,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-24 18:10:43,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:43,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 18:10:43,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:43,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 296 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:42402 deadline: 1690223443309, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
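The ConstraintException above is the behaviour testFailRemoveGroup asserts: removeRSGroup refuses to drop a group that still owns servers, and the group can only be removed once its servers have been moved elsewhere (which is exactly what the next lines do). A minimal client-side sketch of that sequence, assuming an open Connection; the group name and host:port addresses mirror the log and are otherwise illustrative:

    import java.io.IOException;
    import java.util.Set;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RemoveGroupSketch {
      static void removeBarGroup(Connection conn) throws IOException {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        try {
          groups.removeRSGroup("bar");          // group still owns 3 servers -> rejected
        } catch (ConstraintException expected) {
          // "RSGroup bar has 3 servers; you must remove these servers ..."
        }
        // Empty the group first, then the removal goes through.
        Set<Address> servers = Stream
            .of("jenkins-hbase4.apache.org:42261",
                "jenkins-hbase4.apache.org:34389",
                "jenkins-hbase4.apache.org:40159")
            .map(Address::fromString)
            .collect(Collectors.toSet());
        groups.moveServers(servers, "default");
        groups.removeRSGroup("bar");
      }
    }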
2023-07-24 18:10:43,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:43,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-24 18:10:43,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,314 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:43,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-24 18:10:43,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976, jenkins-hbase4.apache.org,42261,1690222228228] are moved back to bar 2023-07-24 18:10:43,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-24 18:10:43,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:43,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-24 18:10:43,328 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42261] ipc.CallRunner(144): callId: 211 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:34722 deadline: 1690222303328, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46109 startCode=1690222228457. As of locationSeqNum=6. 
2023-07-24 18:10:43,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:43,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:43,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,451 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-24 18:10:43,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-24 18:10:43,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 18:10:43,456 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222243455"}]},"ts":"1690222243455"} 2023-07-24 18:10:43,457 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-24 18:10:43,460 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-24 18:10:43,461 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, UNASSIGN}] 2023-07-24 18:10:43,465 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, UNASSIGN 2023-07-24 18:10:43,466 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:43,466 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222243465"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222243465"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222243465"}]},"ts":"1690222243465"} 2023-07-24 18:10:43,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:43,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 18:10:43,621 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:43,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 823faf5d0debaca3ba57e04c27bdaa67, disabling compactions & flushes 2023-07-24 18:10:43,622 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:43,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:43,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. after waiting 0 ms 2023-07-24 18:10:43,622 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 2023-07-24 18:10:43,627 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 18:10:43,628 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67. 
2023-07-24 18:10:43,628 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 823faf5d0debaca3ba57e04c27bdaa67: 2023-07-24 18:10:43,629 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:43,630 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=823faf5d0debaca3ba57e04c27bdaa67, regionState=CLOSED 2023-07-24 18:10:43,630 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690222243630"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222243630"}]},"ts":"1690222243630"} 2023-07-24 18:10:43,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-24 18:10:43,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 823faf5d0debaca3ba57e04c27bdaa67, server=jenkins-hbase4.apache.org,46109,1690222228457 in 164 msec 2023-07-24 18:10:43,635 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-24 18:10:43,635 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=823faf5d0debaca3ba57e04c27bdaa67, UNASSIGN in 172 msec 2023-07-24 18:10:43,636 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222243636"}]},"ts":"1690222243636"} 2023-07-24 18:10:43,637 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-24 18:10:43,639 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-24 18:10:43,641 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-24 18:10:43,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-24 18:10:43,758 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-24 18:10:43,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-24 18:10:43,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,761 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-24 18:10:43,762 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,764 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:43,766 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:43,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 18:10:43,768 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits] 2023-07-24 18:10:43,773 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/10.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67/recovered.edits/10.seqid 2023-07-24 18:10:43,774 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testFailRemoveGroup/823faf5d0debaca3ba57e04c27bdaa67 2023-07-24 18:10:43,774 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-24 18:10:43,776 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,779 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-24 18:10:43,781 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-24 18:10:43,782 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,783 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-24 18:10:43,783 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222243783"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:43,785 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:43,785 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 823faf5d0debaca3ba57e04c27bdaa67, NAME => 'Group_testFailRemoveGroup,,1690222240537.823faf5d0debaca3ba57e04c27bdaa67.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:43,785 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-24 18:10:43,785 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222243785"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:43,786 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-24 18:10:43,789 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-24 18:10:43,790 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 30 msec 2023-07-24 18:10:43,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-24 18:10:43,869 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-24 18:10:43,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,872 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:43,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
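The pid=90 DisableTableProcedure and pid=93 DeleteTableProcedure above (with HFileArchiver moving the region directory under /archive) are the server-side halves of an ordinary disable-then-delete issued through the Admin API. A minimal sketch, assuming a default client configuration on the classpath; only the table name is taken from the log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropTableSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);   // DisableTableProcedure (pid=90 in the log)
          }
          admin.deleteTable(tn);      // DeleteTableProcedure (pid=93); region files are archived
        }
      }
    }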
2023-07-24 18:10:43,873 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:43,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:43,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:43,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:43,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:43,888 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:43,891 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:43,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:43,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:43,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:43,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:43,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,904 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:43,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:43,907 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 344 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223443907, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:43,908 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:43,910 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:43,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,911 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:43,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:43,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:43,938 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=512 (was 493) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6043b73e-shared-pool-2 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-111891419_17 at /127.0.0.1:51972 [Receiving block BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-11 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-111891419_17 at /127.0.0.1:44006 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-111891419_17 at /127.0.0.1:40568 [Receiving block 
BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_720430503_17 at /127.0.0.1:51966 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-111891419_17 at /127.0.0.1:41038 [Receiving block BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_826727170_17 at /127.0.0.1:36072 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f-prefix:jenkins-hbase4.apache.org,46109,1690222228457.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-802604675-172.31.14.131-1690222222036:blk_1073741857_1033, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=791 (was 757) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=599 (was 540) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=5678 (was 5953) 2023-07-24 18:10:43,941 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 18:10:43,960 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=599, ProcessCount=177, AvailableMemoryMB=5677 2023-07-24 18:10:43,960 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 18:10:43,961 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-24 18:10:43,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:43,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:43,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:43,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:43,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:43,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:43,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:43,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:43,979 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:43,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:43,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:43,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 18:10:43,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:43,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:43,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:43,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:43,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 372 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223443992, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:43,993 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:43,997 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:43,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:43,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:43,999 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:44,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:44,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:44,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:44,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:44,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_988431236 2023-07-24 18:10:44,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:44,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:44,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:44,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:44,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:44,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:44,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:44,017 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389] to rsgroup Group_testMultiTableMove_988431236 2023-07-24 18:10:44,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:44,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:44,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:44,021 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:44,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:44,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023] are moved back to default 2023-07-24 18:10:44,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_988431236 2023-07-24 18:10:44,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:44,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:44,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:44,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_988431236 2023-07-24 18:10:44,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:44,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:44,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:44,038 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:44,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-24 18:10:44,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:44,041 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:44,042 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:44,042 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:44,042 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:44,048 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:44,051 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,052 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e empty. 2023-07-24 18:10:44,053 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,053 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 18:10:44,081 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:44,082 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => ae8512bb4df93efa93f18637f1cf876e, NAME => 'GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:44,108 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:44,108 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
ae8512bb4df93efa93f18637f1cf876e, disabling compactions & flushes 2023-07-24 18:10:44,109 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,109 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,109 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. after waiting 0 ms 2023-07-24 18:10:44,109 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,109 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,109 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for ae8512bb4df93efa93f18637f1cf876e: 2023-07-24 18:10:44,112 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:44,113 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222244113"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222244113"}]},"ts":"1690222244113"} 2023-07-24 18:10:44,121 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:10:44,122 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:44,122 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222244122"}]},"ts":"1690222244122"} 2023-07-24 18:10:44,123 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-24 18:10:44,132 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:44,132 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:44,132 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:44,132 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:44,132 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:44,132 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, ASSIGN}] 2023-07-24 18:10:44,135 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, ASSIGN 2023-07-24 18:10:44,136 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,40159,1690222227976; forceNewPlan=false, retain=false 2023-07-24 18:10:44,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:44,286 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:44,288 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:44,288 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222244288"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222244288"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222244288"}]},"ts":"1690222244288"} 2023-07-24 18:10:44,290 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:44,343 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:44,446 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae8512bb4df93efa93f18637f1cf876e, NAME => 'GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,448 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,450 DEBUG [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/f 2023-07-24 18:10:44,450 DEBUG [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/f 2023-07-24 18:10:44,451 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae8512bb4df93efa93f18637f1cf876e columnFamilyName f 2023-07-24 18:10:44,451 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] regionserver.HStore(310): Store=ae8512bb4df93efa93f18637f1cf876e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:44,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:44,478 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:44,479 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae8512bb4df93efa93f18637f1cf876e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11379184320, jitterRate=0.059769123792648315}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:44,479 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae8512bb4df93efa93f18637f1cf876e: 2023-07-24 18:10:44,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e., pid=96, masterSystemTime=1690222244442 2023-07-24 18:10:44,481 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:44,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 
2023-07-24 18:10:44,482 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:44,482 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222244482"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222244482"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222244482"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222244482"}]},"ts":"1690222244482"} 2023-07-24 18:10:44,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-24 18:10:44,485 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,40159,1690222227976 in 193 msec 2023-07-24 18:10:44,487 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-24 18:10:44,487 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, ASSIGN in 353 msec 2023-07-24 18:10:44,488 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:44,488 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222244488"}]},"ts":"1690222244488"} 2023-07-24 18:10:44,490 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-24 18:10:44,493 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:44,495 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 460 msec 2023-07-24 18:10:44,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-24 18:10:44,644 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-24 18:10:44,644 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-24 18:10:44,645 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:44,649 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-24 18:10:44,649 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:44,649 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-24 18:10:44,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:44,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:44,657 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:44,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-24 18:10:44,658 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:44,660 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:44,661 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:44,661 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:44,662 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:44,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:44,881 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:44,883 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:44,887 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c empty. 
2023-07-24 18:10:44,888 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:44,888 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 18:10:44,933 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:44,935 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => a743d274fc452a34cb7613ab706c5b3c, NAME => 'GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:44,959 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:44,959 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing a743d274fc452a34cb7613ab706c5b3c, disabling compactions & flushes 2023-07-24 18:10:44,959 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:44,960 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:44,960 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. after waiting 0 ms 2023-07-24 18:10:44,960 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:44,960 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 
2023-07-24 18:10:44,960 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for a743d274fc452a34cb7613ab706c5b3c: 2023-07-24 18:10:44,964 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:44,965 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222244965"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222244965"}]},"ts":"1690222244965"} 2023-07-24 18:10:44,967 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:44,967 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:44,968 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222244968"}]},"ts":"1690222244968"} 2023-07-24 18:10:44,969 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-24 18:10:44,973 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:44,973 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:44,973 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:44,973 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:44,973 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:44,973 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, ASSIGN}] 2023-07-24 18:10:44,976 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, ASSIGN 2023-07-24 18:10:44,982 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:45,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:45,133 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:45,134 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:45,134 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245134"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245134"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245134"}]},"ts":"1690222245134"} 2023-07-24 18:10:45,136 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:45,291 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a743d274fc452a34cb7613ab706c5b3c, NAME => 'GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:45,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:45,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,292 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,294 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,296 DEBUG [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/f 2023-07-24 18:10:45,296 DEBUG [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/f 2023-07-24 18:10:45,296 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a743d274fc452a34cb7613ab706c5b3c columnFamilyName f 2023-07-24 18:10:45,297 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] regionserver.HStore(310): Store=a743d274fc452a34cb7613ab706c5b3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:45,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,303 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:45,304 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a743d274fc452a34cb7613ab706c5b3c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10870502240, jitterRate=0.012394413352012634}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:45,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a743d274fc452a34cb7613ab706c5b3c: 2023-07-24 18:10:45,305 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c., pid=99, masterSystemTime=1690222245288 2023-07-24 18:10:45,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 
2023-07-24 18:10:45,307 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:45,307 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245307"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222245307"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222245307"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222245307"}]},"ts":"1690222245307"} 2023-07-24 18:10:45,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-24 18:10:45,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,42261,1690222228228 in 172 msec 2023-07-24 18:10:45,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-24 18:10:45,312 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, ASSIGN in 337 msec 2023-07-24 18:10:45,312 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:45,312 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222245312"}]},"ts":"1690222245312"} 2023-07-24 18:10:45,314 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-24 18:10:45,316 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:45,317 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 665 msec 2023-07-24 18:10:45,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-24 18:10:45,379 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-24 18:10:45,379 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-24 18:10:45,379 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:45,384 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
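The entries above walk CreateTableProcedure pid=97 for GrouptestMultiTableMoveB through ADD_TO_META, ASSIGN_REGIONS, UPDATE_DESC_CACHE and POST_OPERATION, after which the client reports "procId: 97 completed" and waits for the region to be assigned. A minimal client-side sketch of issuing such a create follows; only the table name and column family "f" come from the log, while the class name and the hbase-site.xml-based connection setup are illustrative assumptions.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTableSketch {
      public static void main(String[] args) throws Exception {
        // Connect with whatever hbase-site.xml is on the classpath (assumption, not from this log)
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // One column family "f", single region -- matches the table created above.
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("GrouptestMultiTableMoveB"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build());   // blocks until the master's CreateTableProcedure finishes
        }
      }
    }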
2023-07-24 18:10:45,385 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:45,385 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-24 18:10:45,386 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:45,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 18:10:45,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:45,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 18:10:45,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:45,399 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_988431236 2023-07-24 18:10:45,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_988431236 2023-07-24 18:10:45,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:45,406 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:45,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:45,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:45,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_988431236 2023-07-24 18:10:45,410 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region a743d274fc452a34cb7613ab706c5b3c to RSGroup Group_testMultiTableMove_988431236 2023-07-24 18:10:45,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, REOPEN/MOVE 2023-07-24 18:10:45,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_988431236 2023-07-24 18:10:45,412 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region ae8512bb4df93efa93f18637f1cf876e to RSGroup Group_testMultiTableMove_988431236 2023-07-24 18:10:45,412 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, REOPEN/MOVE 2023-07-24 18:10:45,413 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, REOPEN/MOVE 2023-07-24 18:10:45,413 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:45,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_988431236, current retry=0 2023-07-24 18:10:45,414 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, REOPEN/MOVE 2023-07-24 18:10:45,414 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245413"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245413"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245413"}]},"ts":"1690222245413"} 2023-07-24 18:10:45,415 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:45,415 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245415"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245415"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245415"}]},"ts":"1690222245415"} 2023-07-24 18:10:45,415 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:45,416 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,40159,1690222227976}] 2023-07-24 18:10:45,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a743d274fc452a34cb7613ab706c5b3c, disabling compactions & flushes 2023-07-24 18:10:45,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. after waiting 0 ms 2023-07-24 18:10:45,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae8512bb4df93efa93f18637f1cf876e, disabling compactions & flushes 2023-07-24 18:10:45,571 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:45,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:45,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. after waiting 0 ms 2023-07-24 18:10:45,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:45,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:45,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:45,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a743d274fc452a34cb7613ab706c5b3c: 2023-07-24 18:10:45,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a743d274fc452a34cb7613ab706c5b3c move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:45,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 
2023-07-24 18:10:45,575 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae8512bb4df93efa93f18637f1cf876e: 2023-07-24 18:10:45,575 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ae8512bb4df93efa93f18637f1cf876e move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:45,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,577 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=CLOSED 2023-07-24 18:10:45,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,577 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245577"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222245577"}]},"ts":"1690222245577"} 2023-07-24 18:10:45,578 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=CLOSED 2023-07-24 18:10:45,578 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245578"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222245578"}]},"ts":"1690222245578"} 2023-07-24 18:10:45,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-24 18:10:45,581 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,42261,1690222228228 in 164 msec 2023-07-24 18:10:45,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-24 18:10:45,581 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,40159,1690222227976 in 163 msec 2023-07-24 18:10:45,582 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:45,582 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; forceNewPlan=false, retain=false 2023-07-24 18:10:45,732 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 
18:10:45,732 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:45,732 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245732"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245732"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245732"}]},"ts":"1690222245732"} 2023-07-24 18:10:45,733 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245732"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222245732"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222245732"}]},"ts":"1690222245732"} 2023-07-24 18:10:45,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:45,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:45,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:45,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ae8512bb4df93efa93f18637f1cf876e, NAME => 'GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:45,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:45,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,892 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,893 DEBUG [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/f 2023-07-24 18:10:45,893 DEBUG [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/f 2023-07-24 18:10:45,893 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ae8512bb4df93efa93f18637f1cf876e columnFamilyName f 2023-07-24 18:10:45,894 INFO [StoreOpener-ae8512bb4df93efa93f18637f1cf876e-1] regionserver.HStore(310): Store=ae8512bb4df93efa93f18637f1cf876e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:45,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,896 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,899 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:45,900 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ae8512bb4df93efa93f18637f1cf876e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10605405440, jitterRate=-0.012294650077819824}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:45,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ae8512bb4df93efa93f18637f1cf876e: 2023-07-24 18:10:45,901 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e., pid=104, masterSystemTime=1690222245886 2023-07-24 18:10:45,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:45,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 
2023-07-24 18:10:45,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a743d274fc452a34cb7613ab706c5b3c, NAME => 'GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:45,903 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:45,903 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245903"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222245903"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222245903"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222245903"}]},"ts":"1690222245903"} 2023-07-24 18:10:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,905 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,906 DEBUG [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/f 2023-07-24 18:10:45,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-24 18:10:45,906 DEBUG [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/f 2023-07-24 18:10:45,906 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,34389,1690222232023 in 171 msec 2023-07-24 18:10:45,907 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a743d274fc452a34cb7613ab706c5b3c columnFamilyName f 2023-07-24 18:10:45,909 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, REOPEN/MOVE in 494 msec 2023-07-24 18:10:45,911 INFO [StoreOpener-a743d274fc452a34cb7613ab706c5b3c-1] regionserver.HStore(310): Store=a743d274fc452a34cb7613ab706c5b3c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:45,911 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,916 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:45,917 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a743d274fc452a34cb7613ab706c5b3c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11790539200, jitterRate=0.09807953238487244}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:45,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a743d274fc452a34cb7613ab706c5b3c: 2023-07-24 18:10:45,918 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c., pid=105, masterSystemTime=1690222245886 2023-07-24 18:10:45,919 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:45,919 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 
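From 18:10:45,399 the RSGroupAdminEndpoint handles a MoveTables request for [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA]: pids 100-105 close both regions on their original servers and reopen them on jenkins-hbase4.apache.org,34389, the server that belongs to Group_testMultiTableMove_988431236. A rough sketch of the client call that triggers this, assuming the branch-2.x hbase-rsgroup client and an already-open Connection (both assumptions; the helper name is hypothetical):

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void moveTestTables(Connection conn) throws IOException {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<TableName> tables = new HashSet<>(Arrays.asList(
          TableName.valueOf("GrouptestMultiTableMoveA"),
          TableName.valueOf("GrouptestMultiTableMoveB")));
      // Returns only after every region of both tables has been reopened on a
      // server in the target group (the REOPEN/MOVE procedures logged above).
      rsGroupAdmin.moveTables(tables, "Group_testMultiTableMove_988431236");
    }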
2023-07-24 18:10:45,919 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:45,919 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222245919"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222245919"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222245919"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222245919"}]},"ts":"1690222245919"} 2023-07-24 18:10:45,923 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-24 18:10:45,923 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,34389,1690222232023 in 186 msec 2023-07-24 18:10:45,926 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, REOPEN/MOVE in 513 msec 2023-07-24 18:10:46,414 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-24 18:10:46,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_988431236. 2023-07-24 18:10:46,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:46,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:46,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:46,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:46,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-24 18:10:46,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:46,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:46,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:46,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_988431236 2023-07-24 18:10:46,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:46,429 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-24 18:10:46,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-24 18:10:46,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 18:10:46,434 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222246434"}]},"ts":"1690222246434"} 2023-07-24 18:10:46,435 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-24 18:10:46,437 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-24 18:10:46,449 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, UNASSIGN}] 2023-07-24 18:10:46,451 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, UNASSIGN 2023-07-24 18:10:46,455 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:46,455 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222246455"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222246455"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222246455"}]},"ts":"1690222246455"} 2023-07-24 18:10:46,458 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure ae8512bb4df93efa93f18637f1cf876e, 
server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:46,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 18:10:46,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:46,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ae8512bb4df93efa93f18637f1cf876e, disabling compactions & flushes 2023-07-24 18:10:46,613 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:46,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:46,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. after waiting 0 ms 2023-07-24 18:10:46,613 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 2023-07-24 18:10:46,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:46,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e. 
2023-07-24 18:10:46,618 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ae8512bb4df93efa93f18637f1cf876e: 2023-07-24 18:10:46,620 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:46,621 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=ae8512bb4df93efa93f18637f1cf876e, regionState=CLOSED 2023-07-24 18:10:46,621 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222246621"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222246621"}]},"ts":"1690222246621"} 2023-07-24 18:10:46,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-24 18:10:46,626 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure ae8512bb4df93efa93f18637f1cf876e, server=jenkins-hbase4.apache.org,34389,1690222232023 in 165 msec 2023-07-24 18:10:46,628 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-24 18:10:46,628 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=ae8512bb4df93efa93f18637f1cf876e, UNASSIGN in 180 msec 2023-07-24 18:10:46,629 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222246629"}]},"ts":"1690222246629"} 2023-07-24 18:10:46,631 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-24 18:10:46,634 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-24 18:10:46,640 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 206 msec 2023-07-24 18:10:46,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-24 18:10:46,736 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-24 18:10:46,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-24 18:10:46,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,741 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_988431236' 2023-07-24 18:10:46,741 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:46,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:46,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:46,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:46,746 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:46,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 18:10:46,748 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits] 2023-07-24 18:10:46,755 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e/recovered.edits/7.seqid 2023-07-24 18:10:46,756 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveA/ae8512bb4df93efa93f18637f1cf876e 2023-07-24 18:10:46,756 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-24 18:10:46,759 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,762 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-24 18:10:46,764 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-24 18:10:46,766 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,766 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
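pid=106 and pid=109 above are the DisableTableProcedure and DeleteTableProcedure for GrouptestMultiTableMoveA: the region is unassigned, its directory is archived under the archive/ tree, and (continuing below) its region and table-state rows are removed from hbase:meta, while the RSGroupAdminEndpoint drops the deleted table from the rsgroup znode. On the client side this is just two Admin calls; a minimal sketch (helper name hypothetical, the Admin handle assumed to exist):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    static void dropTable(Admin admin, TableName tn) throws IOException {
      if (admin.tableExists(tn)) {
        if (!admin.isTableDisabled(tn)) {
          admin.disableTable(tn);   // DisableTableProcedure: unassigns all regions (pid=106 above)
        }
        admin.deleteTable(tn);      // DeleteTableProcedure: archives region dirs and
                                    // deletes the meta rows (pid=109 above)
      }
    }

    // e.g. dropTable(admin, TableName.valueOf("GrouptestMultiTableMoveA"));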
2023-07-24 18:10:46,766 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222246766"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:46,769 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:46,769 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => ae8512bb4df93efa93f18637f1cf876e, NAME => 'GrouptestMultiTableMoveA,,1690222244033.ae8512bb4df93efa93f18637f1cf876e.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:46,769 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-24 18:10:46,769 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222246769"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:46,771 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-24 18:10:46,773 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-24 18:10:46,774 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 35 msec 2023-07-24 18:10:46,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-24 18:10:46,850 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-24 18:10:46,850 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-24 18:10:46,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-24 18:10:46,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:46,864 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222246864"}]},"ts":"1690222246864"} 2023-07-24 18:10:46,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:46,866 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-24 18:10:46,869 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-24 18:10:46,870 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, UNASSIGN}] 2023-07-24 18:10:46,871 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, UNASSIGN 2023-07-24 18:10:46,872 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:46,872 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222246872"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222246872"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222246872"}]},"ts":"1690222246872"} 2023-07-24 18:10:46,874 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:46,944 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:46,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:47,028 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:47,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a743d274fc452a34cb7613ab706c5b3c, disabling compactions & flushes 2023-07-24 18:10:47,029 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:47,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:47,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. after waiting 0 ms 2023-07-24 18:10:47,029 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 2023-07-24 18:10:47,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:47,035 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c. 
2023-07-24 18:10:47,036 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a743d274fc452a34cb7613ab706c5b3c: 2023-07-24 18:10:47,037 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:47,038 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=a743d274fc452a34cb7613ab706c5b3c, regionState=CLOSED 2023-07-24 18:10:47,038 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690222247038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222247038"}]},"ts":"1690222247038"} 2023-07-24 18:10:47,044 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-24 18:10:47,044 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure a743d274fc452a34cb7613ab706c5b3c, server=jenkins-hbase4.apache.org,34389,1690222232023 in 167 msec 2023-07-24 18:10:47,045 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-24 18:10:47,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=a743d274fc452a34cb7613ab706c5b3c, UNASSIGN in 174 msec 2023-07-24 18:10:47,046 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222247046"}]},"ts":"1690222247046"} 2023-07-24 18:10:47,048 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-24 18:10:47,050 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-24 18:10:47,053 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 200 msec 2023-07-24 18:10:47,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-24 18:10:47,169 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-24 18:10:47,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-24 18:10:47,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,178 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_988431236' 2023-07-24 18:10:47,180 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:47,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,184 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,190 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:47,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 18:10:47,193 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits] 2023-07-24 18:10:47,201 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits/7.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c/recovered.edits/7.seqid 2023-07-24 18:10:47,202 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/GrouptestMultiTableMoveB/a743d274fc452a34cb7613ab706c5b3c 2023-07-24 18:10:47,202 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-24 18:10:47,205 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,208 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-24 18:10:47,210 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-24 18:10:47,211 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,211 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
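The HFileArchiver entries above move the dropped region's files out of the table's .tmp data directory and into the cluster archive before the region is removed from hbase:meta. A small sketch of how one could confirm the archived layout afterwards (paths taken from the log lines above; `conf` is an assumed Hadoop Configuration pointing at the test HDFS on localhost:44625; illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch only: list what HFileArchiver left under archive/data/default/<table>.
    Configuration conf = new Configuration();
    Path archivedTable = new Path(
        "hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f"
        + "/archive/data/default/GrouptestMultiTableMoveB");
    FileSystem fs = archivedTable.getFileSystem(conf);
    for (FileStatus region : fs.listStatus(archivedTable)) {
      System.out.println(region.getPath());   // e.g. .../a743d274fc452a34cb7613ab706c5b3c
    }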
2023-07-24 18:10:47,212 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222247211"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:47,213 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:47,213 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a743d274fc452a34cb7613ab706c5b3c, NAME => 'GrouptestMultiTableMoveB,,1690222244651.a743d274fc452a34cb7613ab706c5b3c.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:47,213 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-24 18:10:47,213 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222247213"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:47,215 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-24 18:10:47,217 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-24 18:10:47,218 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 47 msec 2023-07-24 18:10:47,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-24 18:10:47,294 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-24 18:10:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,299 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
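The repeated "Checking to see if procedure is done pid=113" lines are the client's HBaseAdmin TableFuture polling the master until the DeleteTableProcedure finishes, at which point the listener logs "procId: 113 completed". A sketch of the equivalent client call (again assuming an open Connection `conn`; checked exceptions omitted; illustrative only):

    import java.util.concurrent.Future;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch only: the async form returns a Future backed by the master-side procedure;
    // get() keeps asking the master whether the procedure is done, as logged above.
    try (Admin admin = conn.getAdmin()) {
      Future<Void> f = admin.deleteTableAsync(TableName.valueOf("GrouptestMultiTableMoveB"));
      f.get();   // returns once the DeleteTableProcedure (pid=113) reports SUCCESS
    }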
2023-07-24 18:10:47,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389] to rsgroup default 2023-07-24 18:10:47,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_988431236 2023-07-24 18:10:47,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_988431236, current retry=0 2023-07-24 18:10:47,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023] are moved back to Group_testMultiTableMove_988431236 2023-07-24 18:10:47,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_988431236 => default 2023-07-24 18:10:47,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_988431236 2023-07-24 18:10:47,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:47,326 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
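The MoveServers and RemoveRSGroup requests above are TestRSGroupsBase's per-test teardown: it moves jenkins-hbase4.apache.org:34389 back into the default group and then drops the test-specific group. A sketch of the same calls through the RSGroupAdminClient this module's tests use (the constructor and exact signatures are assumed from the stack traces elsewhere in this log; illustrative only; `conn` is an assumed open Connection):

    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: move a server back to the default group, then remove the test group.
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:34389")),
        "default");
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_988431236");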
2023-07-24 18:10:47,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:47,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:47,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:47,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,339 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:47,340 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:47,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,342 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,344 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:47,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:47,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 510 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223447352, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:47,354 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:47,356 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:47,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,357 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:47,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,378 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=509 (was 512), OpenFileDescriptor=791 (was 791), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=599 (was 599), ProcessCount=177 (was 177), AvailableMemoryMB=5547 (was 5677) 2023-07-24 18:10:47,378 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-24 18:10:47,397 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=509, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=599, ProcessCount=177, AvailableMemoryMB=5546 2023-07-24 18:10:47,397 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-24 18:10:47,398 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-24 18:10:47,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,403 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:47,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:47,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:47,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:47,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,412 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:47,413 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:47,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,415 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:47,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:47,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 538 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223447423, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:47,423 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:47,425 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:47,425 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,426 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:47,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-24 18:10:47,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup oldGroup 2023-07-24 18:10:47,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,446 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:47,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to default 2023-07-24 18:10:47,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-24 18:10:47,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 18:10:47,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-24 18:10:47,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,455 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-24 18:10:47,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 18:10:47,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,459 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:47,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261] to rsgroup anotherRSGroup 2023-07-24 18:10:47,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 18:10:47,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:47,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:47,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42261,1690222228228] are moved back to default 2023-07-24 18:10:47,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-24 18:10:47,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,477 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 18:10:47,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-24 18:10:47,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,485 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-24 18:10:47,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:42402 deadline: 1690223447485, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-24 18:10:47,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-24 18:10:47,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:42402 deadline: 1690223447487, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-24 18:10:47,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-24 18:10:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:42402 deadline: 1690223447488, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-24 18:10:47,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-24 18:10:47,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 578 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:42402 deadline: 1690223447489, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-24 18:10:47,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
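The three rename failures above exercise the constraint checks in RSGroupInfoManagerImpl.renameRSGroup: the 'default' group can never be renamed (thrown at ~line 403), the source group must exist (~407), and the target name must be free (~410). Below is a self-contained sketch of that validation order, inferred only from the exception messages and line numbers in these stack traces; the class name and the use of IllegalArgumentException are placeholders, not the HBase source, which throws org.apache.hadoop.hbase.constraint.ConstraintException with the same messages.

    import java.util.HashMap;
    import java.util.Map;

    // Validation order inferred from the ConstraintException messages and the
    // RSGroupInfoManagerImpl.renameRSGroup line numbers (~403/407/410) in the traces above.
    public class RenameRSGroupValidationSketch {
      private final Map<String, Object> groups = new HashMap<>();

      void renameRSGroup(String oldName, String newName) {
        if ("default".equals(oldName)) {
          throw new IllegalArgumentException("Can't rename default rsgroup");            // ~line 403
        }
        if (!groups.containsKey(oldName)) {
          throw new IllegalArgumentException("RSGroup " + oldName + " does not exist");  // ~line 407
        }
        if (groups.containsKey(newName)) {
          throw new IllegalArgumentException("Group already exists: " + newName);        // ~line 410
        }
        // only after all three checks pass is the group re-keyed and the
        // /hbase/rsgroup znodes rewritten, as the DEBUG lines above show
        groups.put(newName, groups.remove(oldName));
      }
    }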
2023-07-24 18:10:47,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261] to rsgroup default 2023-07-24 18:10:47,497 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-24 18:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:47,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-24 18:10:47,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42261,1690222228228] are moved back to anotherRSGroup 2023-07-24 18:10:47,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-24 18:10:47,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-24 18:10:47,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 18:10:47,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-24 18:10:47,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:47,514 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-24 18:10:47,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,515 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-24 18:10:47,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to oldGroup 2023-07-24 18:10:47,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-24 18:10:47,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-24 18:10:47,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,527 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:47,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
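The MoveServers/MoveTables/RemoveRSGroup sequence above is the standard per-test cleanup: every non-default group is drained back into 'default' and then dropped, with each mutation rewritten to the /hbase/rsgroup znodes ("Writing ZK GroupInfo count: ..."). A rough client-side sketch of that loop follows. RSGroupAdminClient.moveServers is confirmed by the stack traces elsewhere in this log; the other method names are assumed to mirror the RSGroupAdminService calls the master logs.

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch of the cleanup the log records: empty every non-default group back into
    // 'default', then remove it. Not the test's actual code.
    public class RSGroupCleanupSketch {
      static void restoreDefaultGroups(Connection conn) throws IOException {
        RSGroupAdminClient admin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : admin.listRSGroups()) {
          if (RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
            continue;                                            // 'default' is never removed
          }
          Set<Address> servers = new HashSet<>(group.getServers());
          if (!servers.isEmpty()) {
            admin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP); // "Move servers done: <group> => default"
          }
          admin.removeRSGroup(group.getName());                  // "remove rsgroup <group>"
        }
      }
    }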
2023-07-24 18:10:47,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:47,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,532 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:47,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:47,537 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,540 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:47,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:47,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:47,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:47,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 614 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223447554, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:47,555 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:47,557 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:47,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,560 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:47,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,580 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=513 (was 509) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-21 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=791 (was 791), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=599 (was 599), ProcessCount=177 (was 177), AvailableMemoryMB=5544 (was 5546) 2023-07-24 18:10:47,580 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-24 18:10:47,598 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=513, OpenFileDescriptor=791, MaxFileDescriptor=60000, SystemLoadAverage=599, ProcessCount=177, AvailableMemoryMB=5543 2023-07-24 18:10:47,598 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-24 18:10:47,599 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-24 18:10:47,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:47,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
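The "Waiting up to [60,000] milli-secs" / "Waiting for cleanup to finish" lines a little earlier come from a bounded poll in the test base class: a predicate is re-evaluated until it holds or the timeout (scaled by wait.for.ratio) expires. A minimal sketch using the Waiter test utility from the HBase test classpath is shown below; the predicate body is an assumption, the real one compares the remaining rsgroups against the expected set.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.Waiter;

    // Illustrative bounded wait: Waiter.waitFor polls the predicate until it returns
    // true or 60,000 ms (times wait.for.ratio) have elapsed.
    public class CleanupWaitSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Waiter.waitFor(conf, 60000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            // The test would list the rsgroups here and compare against the expected
            // [default, master] set; returning true just ends the wait immediately.
            return true;
          }
        });
      }
    }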
2023-07-24 18:10:47,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:47,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:47,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:47,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:47,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:47,612 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:47,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:47,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:47,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:47,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:47,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 642 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223447622, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:47,623 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:47,624 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:47,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,625 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:47,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:47,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-24 18:10:47,629 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:47,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:47,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup oldgroup 2023-07-24 18:10:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to default 2023-07-24 18:10:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-24 18:10:47,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:47,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:47,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:47,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 18:10:47,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:47,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:47,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-24 18:10:47,655 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:47,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-24 18:10:47,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:47,657 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:47,658 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:47,658 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:47,659 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:47,661 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:47,663 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:47,663 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/testRename/d18956f41694b9a75bceceb81de91192 empty. 
2023-07-24 18:10:47,664 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:47,664 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-24 18:10:47,687 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:47,692 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => d18956f41694b9a75bceceb81de91192, NAME => 'testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:47,716 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:47,717 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing d18956f41694b9a75bceceb81de91192, disabling compactions & flushes 2023-07-24 18:10:47,717 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:47,717 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:47,717 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. after waiting 0 ms 2023-07-24 18:10:47,717 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:47,717 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:47,717 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:47,720 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:47,721 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222247721"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222247721"}]},"ts":"1690222247721"} 2023-07-24 18:10:47,723 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
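The shell-style descriptor logged for 'testRename' (a single family 'tr', REGION_REPLICATION 1, VERSIONS 1, everything else at defaults such as BLOCKSIZE 65536 and TTL FOREVER) corresponds to a plain Admin.createTable call. A hedged Java-client equivalent, not the test's actual code:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Creates a 'testRename' table matching the descriptor in the log: one column
    // family 'tr' with a single version and region replication of 1.
    public class CreateTestRenameTable {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
              .setRegionReplication(1)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("tr"))
                  .setMaxVersions(1)
                  .build())
              .build());
        }
      }
    }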
2023-07-24 18:10:47,724 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:47,724 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222247724"}]},"ts":"1690222247724"} 2023-07-24 18:10:47,726 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-24 18:10:47,743 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:47,743 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:47,743 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:47,743 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:47,744 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, ASSIGN}] 2023-07-24 18:10:47,747 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, ASSIGN 2023-07-24 18:10:47,749 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:47,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:47,899 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:47,901 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:47,901 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222247901"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222247901"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222247901"}]},"ts":"1690222247901"} 2023-07-24 18:10:47,902 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:47,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:48,058 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d18956f41694b9a75bceceb81de91192, NAME => 'testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,060 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,062 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:48,062 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:48,063 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d18956f41694b9a75bceceb81de91192 columnFamilyName tr 2023-07-24 18:10:48,063 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(310): Store=d18956f41694b9a75bceceb81de91192/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:48,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,067 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:48,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d18956f41694b9a75bceceb81de91192; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10416235680, jitterRate=-0.02991245687007904}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:48,071 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:48,072 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192., pid=116, masterSystemTime=1690222248054 2023-07-24 18:10:48,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,074 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 
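The CompactionConfiguration values logged above look like stock HBase defaults rather than anything this test sets explicitly. A minimal, hedged sketch of the standard configuration keys believed to back a few of those numbers (assuming an unmodified hbase-default.xml; the keys and values below are illustrative, not settings read from this run):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hedged sketch: common compaction-related keys and the default values that would
// produce the CompactionConfiguration line logged above.
public class CompactionDefaultsSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 134217728L);  // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact: 3
        conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact: 10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2F);          // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);  // off-peak ratio 5.000000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period: 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5F);   // major jitter 0.500000
    }
}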
2023-07-24 18:10:48,074 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:48,074 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222248074"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222248074"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222248074"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222248074"}]},"ts":"1690222248074"} 2023-07-24 18:10:48,078 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-24 18:10:48,078 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,46109,1690222228457 in 174 msec 2023-07-24 18:10:48,079 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-24 18:10:48,079 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, ASSIGN in 334 msec 2023-07-24 18:10:48,080 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:48,080 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222248080"}]},"ts":"1690222248080"} 2023-07-24 18:10:48,082 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-24 18:10:48,084 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:48,085 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 432 msec 2023-07-24 18:10:48,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-24 18:10:48,260 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-24 18:10:48,260 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-24 18:10:48,261 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:48,265 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-24 18:10:48,266 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:48,266 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
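The procedure trace above (CreateTableProcedure pid=114 with its ASSIGN subprocedures pid=115/116, followed by the HBaseTestingUtility wait) is what a client-side create-and-wait drives. A minimal sketch, assuming a test with an already-started mini cluster exposed as an HBaseTestingUtility; the table and family names match the log, everything else is illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Hedged sketch: create 'testRename' with family 'tr' and block until its region is
// assigned, assuming util is an already-started HBaseTestingUtility as in this run.
public class CreateTestRenameSketch {
    static void createAndWait(HBaseTestingUtility util) throws Exception {
        TableName tn = TableName.valueOf("testRename");
        try (Admin admin = util.getConnection().getAdmin()) {
            admin.createTable(TableDescriptorBuilder.newBuilder(tn)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
                .build());                            // drives CreateTableProcedure (pid=114 above)
            util.waitUntilAllRegionsAssigned(tn);     // same wait logged by HBaseTestingUtility(3430)
        }
    }
}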
2023-07-24 18:10:48,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-24 18:10:48,272 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:48,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:48,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:48,274 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:48,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-24 18:10:48,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region d18956f41694b9a75bceceb81de91192 to RSGroup oldgroup 2023-07-24 18:10:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:48,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:48,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE 2023-07-24 18:10:48,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-24 18:10:48,277 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE 2023-07-24 18:10:48,278 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:48,278 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222248278"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248278"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248278"}]},"ts":"1690222248278"} 2023-07-24 18:10:48,280 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:48,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d18956f41694b9a75bceceb81de91192, disabling compactions & flushes 2023-07-24 18:10:48,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. after waiting 0 ms 2023-07-24 18:10:48,434 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,438 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:48,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:48,439 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d18956f41694b9a75bceceb81de91192 move to jenkins-hbase4.apache.org,34389,1690222232023 record at close sequenceid=2 2023-07-24 18:10:48,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,442 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=CLOSED 2023-07-24 18:10:48,442 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222248442"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222248442"}]},"ts":"1690222248442"} 2023-07-24 18:10:48,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-24 18:10:48,445 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,46109,1690222228457 in 163 msec 2023-07-24 18:10:48,445 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,34389,1690222232023; 
forceNewPlan=false, retain=false 2023-07-24 18:10:48,595 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:48,596 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:48,596 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222248596"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222248596"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222248596"}]},"ts":"1690222248596"} 2023-07-24 18:10:48,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:48,758 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d18956f41694b9a75bceceb81de91192, NAME => 'testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:48,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:48,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,761 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,762 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:48,762 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:48,763 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d18956f41694b9a75bceceb81de91192 columnFamilyName tr 2023-07-24 18:10:48,764 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(310): Store=d18956f41694b9a75bceceb81de91192/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:48,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,772 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:48,778 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d18956f41694b9a75bceceb81de91192; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9633481280, jitterRate=-0.10281214118003845}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:48,778 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:48,779 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192., pid=119, masterSystemTime=1690222248754 2023-07-24 18:10:48,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:48,782 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 
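The REOPEN/MOVE above (pid=117 with Close/Open subprocedures pid=118/119) is the assignment side of the move-tables request received at 18:10:48,268. On the client it is typically issued through the hbase-rsgroup client, roughly as in the hedged sketch below (conn is an assumed open Connection; "oldgroup" is the target group named in the log):

import java.util.Collections;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hedged sketch: move table 'testRename' into rsgroup 'oldgroup' using the
// branch-2.4 hbase-rsgroup client; conn is assumed to be an open Connection.
public class MoveTableToOldGroupSketch {
    static void move(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")),
            "oldgroup");  // issues the MoveTables RPC that kicks off the REOPEN/MOVE above
    }
}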
2023-07-24 18:10:48,783 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:48,783 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222248783"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222248783"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222248783"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222248783"}]},"ts":"1690222248783"} 2023-07-24 18:10:48,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-24 18:10:48,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,34389,1690222232023 in 187 msec 2023-07-24 18:10:48,788 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE in 511 msec 2023-07-24 18:10:49,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-24 18:10:49,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-24 18:10:49,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:49,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:49,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:49,285 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:49,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 18:10:49,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:49,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-24 18:10:49,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:49,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 18:10:49,288 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:49,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:49,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-24 18:10:49,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:49,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:49,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:49,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:49,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:49,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:49,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:49,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:49,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261] to rsgroup normal 2023-07-24 18:10:49,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:49,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:49,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:49,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:49,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:49,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:49,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42261,1690222228228] are moved back to default 2023-07-24 18:10:49,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-24 18:10:49,313 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:49,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:49,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:49,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 18:10:49,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:49,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:49,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-24 18:10:49,323 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:49,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-24 18:10:49,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 18:10:49,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:49,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:49,330 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:49,331 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-24 18:10:49,331 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:49,334 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:49,335 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,336 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 empty. 2023-07-24 18:10:49,337 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,337 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-24 18:10:49,362 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:49,364 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => b87f0cecb71881f8123dd940c9454207, NAME => 'unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:49,386 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,386 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing b87f0cecb71881f8123dd940c9454207, disabling compactions & flushes 2023-07-24 18:10:49,387 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:49,387 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:49,387 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. after waiting 0 ms 2023-07-24 18:10:49,387 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:49,387 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
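Between the two table creations the log also records an AddRSGroup and a MoveServers request (18:10:49,290 through 18:10:49,313): group "normal" is created and server jenkins-hbase4.apache.org:42261 is moved into it. A hedged sketch of the equivalent client calls, again assuming an open Connection named conn and the same hbase-rsgroup client as above:

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

// Hedged sketch: create rsgroup 'normal' and move one region server into it,
// mirroring the AddRSGroup/MoveServers RPCs logged above.
public class MoveServerToNormalSketch {
    static void setUpNormalGroup(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("normal");
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 42261)),
            "normal");  // server address and target group taken from the log entries above
    }
}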
2023-07-24 18:10:49,387 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:49,398 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:49,401 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222249401"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222249401"}]},"ts":"1690222249401"} 2023-07-24 18:10:49,404 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:49,406 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:49,406 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222249406"}]},"ts":"1690222249406"} 2023-07-24 18:10:49,407 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-24 18:10:49,412 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, ASSIGN}] 2023-07-24 18:10:49,414 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, ASSIGN 2023-07-24 18:10:49,415 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:49,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 18:10:49,567 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:49,567 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222249567"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249567"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249567"}]},"ts":"1690222249567"} 2023-07-24 18:10:49,569 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:49,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-24 18:10:49,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:49,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b87f0cecb71881f8123dd940c9454207, NAME => 'unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:49,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:49,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,728 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,730 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:49,730 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:49,730 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b87f0cecb71881f8123dd940c9454207 columnFamilyName ut 2023-07-24 18:10:49,731 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(310): Store=b87f0cecb71881f8123dd940c9454207/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:49,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:49,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:49,750 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b87f0cecb71881f8123dd940c9454207; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10622060960, jitterRate=-0.010743483901023865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:49,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:49,751 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207., pid=122, masterSystemTime=1690222249721 2023-07-24 18:10:49,753 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:49,753 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
2023-07-24 18:10:49,755 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:49,755 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222249754"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222249754"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222249754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222249754"}]},"ts":"1690222249754"} 2023-07-24 18:10:49,759 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-24 18:10:49,759 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457 in 187 msec 2023-07-24 18:10:49,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-24 18:10:49,762 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, ASSIGN in 347 msec 2023-07-24 18:10:49,763 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:49,763 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222249763"}]},"ts":"1690222249763"} 2023-07-24 18:10:49,765 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-24 18:10:49,769 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:49,779 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 449 msec 2023-07-24 18:10:49,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-24 18:10:49,929 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-24 18:10:49,929 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-24 18:10:49,929 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:49,936 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-24 18:10:49,936 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:49,936 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-24 18:10:49,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-24 18:10:49,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-24 18:10:49,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:49,941 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:49,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:49,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:49,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-24 18:10:49,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region b87f0cecb71881f8123dd940c9454207 to RSGroup normal 2023-07-24 18:10:49,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE 2023-07-24 18:10:49,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-24 18:10:49,945 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE 2023-07-24 18:10:49,946 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:49,946 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222249946"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222249946"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222249946"}]},"ts":"1690222249946"} 2023-07-24 18:10:49,947 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:50,101 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b87f0cecb71881f8123dd940c9454207, disabling compactions & flushes 2023-07-24 18:10:50,102 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
2023-07-24 18:10:50,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:50,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. after waiting 0 ms 2023-07-24 18:10:50,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:50,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:50,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:50,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:50,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b87f0cecb71881f8123dd940c9454207 move to jenkins-hbase4.apache.org,42261,1690222228228 record at close sequenceid=2 2023-07-24 18:10:50,111 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=CLOSED 2023-07-24 18:10:50,111 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222250111"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222250111"}]},"ts":"1690222250111"} 2023-07-24 18:10:50,112 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,116 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-24 18:10:50,116 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457 in 167 msec 2023-07-24 18:10:50,116 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:50,267 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:50,267 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222250267"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222250267"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222250267"}]},"ts":"1690222250267"} 2023-07-24 18:10:50,269 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:50,299 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-24 18:10:50,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:50,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b87f0cecb71881f8123dd940c9454207, NAME => 'unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:50,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:50,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,432 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,433 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:50,433 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:50,433 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b87f0cecb71881f8123dd940c9454207 columnFamilyName ut 2023-07-24 18:10:50,434 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(310): Store=b87f0cecb71881f8123dd940c9454207/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:50,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:50,441 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b87f0cecb71881f8123dd940c9454207; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9848433120, jitterRate=-0.08279319107532501}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:50,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:50,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207., pid=125, masterSystemTime=1690222250421 2023-07-24 18:10:50,443 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:50,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
2023-07-24 18:10:50,444 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:50,444 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222250444"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222250444"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222250444"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222250444"}]},"ts":"1690222250444"} 2023-07-24 18:10:50,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-24 18:10:50,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,42261,1690222228228 in 177 msec 2023-07-24 18:10:50,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE in 504 msec 2023-07-24 18:10:50,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-24 18:10:50,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-24 18:10:50,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:50,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,951 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:50,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 18:10:50,951 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:50,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-24 18:10:50,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:50,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 18:10:50,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:50,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-24 18:10:50,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:50,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:50,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-24 18:10:50,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-24 18:10:50,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-24 18:10:50,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:50,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-24 18:10:50,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:50,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-24 18:10:50,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:50,973 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:50,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:50,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-24 18:10:50,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:50,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:50,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:50,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:50,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:50,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-24 18:10:50,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region b87f0cecb71881f8123dd940c9454207 to RSGroup default 2023-07-24 18:10:50,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE 2023-07-24 18:10:50,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 18:10:50,984 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE 2023-07-24 18:10:50,985 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:50,985 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222250985"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222250985"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222250985"}]},"ts":"1690222250985"} 2023-07-24 18:10:50,986 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:51,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b87f0cecb71881f8123dd940c9454207, disabling compactions & flushes 2023-07-24 18:10:51,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. after waiting 0 ms 2023-07-24 18:10:51,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,144 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:51,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:51,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b87f0cecb71881f8123dd940c9454207 move to jenkins-hbase4.apache.org,46109,1690222228457 record at close sequenceid=5 2023-07-24 18:10:51,146 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,147 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=CLOSED 2023-07-24 18:10:51,147 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222251147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222251147"}]},"ts":"1690222251147"} 2023-07-24 18:10:51,150 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-24 18:10:51,150 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,42261,1690222228228 in 162 msec 2023-07-24 18:10:51,151 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:51,301 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:51,301 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222251301"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222251301"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222251301"}]},"ts":"1690222251301"} 2023-07-24 18:10:51,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:51,461 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b87f0cecb71881f8123dd940c9454207, NAME => 'unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:51,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:51,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,461 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,464 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,465 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:51,465 DEBUG [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/ut 2023-07-24 18:10:51,465 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b87f0cecb71881f8123dd940c9454207 columnFamilyName ut 2023-07-24 18:10:51,466 INFO [StoreOpener-b87f0cecb71881f8123dd940c9454207-1] regionserver.HStore(310): Store=b87f0cecb71881f8123dd940c9454207/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:51,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,468 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:51,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b87f0cecb71881f8123dd940c9454207; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12031233920, jitterRate=0.12049597501754761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:51,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:51,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207., pid=128, masterSystemTime=1690222251455 2023-07-24 18:10:51,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:51,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
2023-07-24 18:10:51,476 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=b87f0cecb71881f8123dd940c9454207, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:51,476 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690222251476"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222251476"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222251476"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222251476"}]},"ts":"1690222251476"} 2023-07-24 18:10:51,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-24 18:10:51,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure b87f0cecb71881f8123dd940c9454207, server=jenkins-hbase4.apache.org,46109,1690222228457 in 174 msec 2023-07-24 18:10:51,480 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b87f0cecb71881f8123dd940c9454207, REOPEN/MOVE in 496 msec 2023-07-24 18:10:51,984 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-24 18:10:51,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-24 18:10:51,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:51,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:42261] to rsgroup default 2023-07-24 18:10:51,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-24 18:10:51,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:51,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:51,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:51,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:10:51,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-24 18:10:51,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,42261,1690222228228] are moved back to normal 2023-07-24 18:10:51,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-24 18:10:51,993 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:51,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-24 18:10:51,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:51,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:51,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:51,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 18:10:52,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:52,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:52,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:52,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:52,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:52,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:52,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:52,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:52,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:52,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:52,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-24 18:10:52,014 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:52,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:52,016 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:52,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-24 18:10:52,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(345): Moving region d18956f41694b9a75bceceb81de91192 to RSGroup default 2023-07-24 18:10:52,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE 2023-07-24 18:10:52,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-24 18:10:52,019 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE 2023-07-24 18:10:52,020 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:52,020 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222252020"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222252020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222252020"}]},"ts":"1690222252020"} 2023-07-24 18:10:52,023 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,34389,1690222232023}] 2023-07-24 18:10:52,120 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-24 18:10:52,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d18956f41694b9a75bceceb81de91192, disabling compactions & flushes 2023-07-24 18:10:52,185 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 
after waiting 0 ms 2023-07-24 18:10:52,186 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,221 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-24 18:10:52,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:52,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding d18956f41694b9a75bceceb81de91192 move to jenkins-hbase4.apache.org,42261,1690222228228 record at close sequenceid=5 2023-07-24 18:10:52,227 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,227 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=CLOSED 2023-07-24 18:10:52,228 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222252227"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222252227"}]},"ts":"1690222252227"} 2023-07-24 18:10:52,232 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-24 18:10:52,232 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,34389,1690222232023 in 207 msec 2023-07-24 18:10:52,235 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:52,385 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:52,386 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:52,386 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222252386"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222252386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222252386"}]},"ts":"1690222252386"} 2023-07-24 18:10:52,388 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:52,544 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d18956f41694b9a75bceceb81de91192, NAME => 'testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:52,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:52,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,546 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,547 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:52,547 DEBUG [StoreOpener-d18956f41694b9a75bceceb81de91192-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/tr 2023-07-24 18:10:52,547 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d18956f41694b9a75bceceb81de91192 columnFamilyName tr 2023-07-24 18:10:52,548 INFO [StoreOpener-d18956f41694b9a75bceceb81de91192-1] regionserver.HStore(310): Store=d18956f41694b9a75bceceb81de91192/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:52,549 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,550 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,553 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:52,554 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d18956f41694b9a75bceceb81de91192; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11226249760, jitterRate=0.04552598297595978}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:52,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:52,555 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192., pid=131, masterSystemTime=1690222252540 2023-07-24 18:10:52,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:52,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 
2023-07-24 18:10:52,558 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=d18956f41694b9a75bceceb81de91192, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:52,558 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690222252558"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222252558"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222252558"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222252558"}]},"ts":"1690222252558"} 2023-07-24 18:10:52,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-24 18:10:52,561 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure d18956f41694b9a75bceceb81de91192, server=jenkins-hbase4.apache.org,42261,1690222228228 in 171 msec 2023-07-24 18:10:52,562 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=d18956f41694b9a75bceceb81de91192, REOPEN/MOVE in 543 msec 2023-07-24 18:10:53,019 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-24 18:10:53,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-24 18:10:53,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,020 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:53,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-24 18:10:53,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-24 18:10:53,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to newgroup 2023-07-24 18:10:53,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-24 18:10:53,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,026 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-24 18:10:53,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,034 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:53,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 762 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223453047, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:53,047 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:53,049 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,050 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:53,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:53,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,068 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=505 (was 513), OpenFileDescriptor=771 (was 791), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=591 (was 599), ProcessCount=177 (was 177), AvailableMemoryMB=5412 (was 5543) 2023-07-24 18:10:53,068 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-24 18:10:53,087 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=505, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=591, ProcessCount=177, AvailableMemoryMB=5411 2023-07-24 18:10:53,087 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=505 is superior to 500 2023-07-24 18:10:53,088 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-24 18:10:53,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:53,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:53,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:53,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:53,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,102 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:53,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 790 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223453112, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:53,112 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:53,114 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,115 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:53,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:53,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-24 18:10:53,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:53,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-24 18:10:53,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-24 18:10:53,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-24 18:10:53,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-24 18:10:53,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 802 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:42402 deadline: 1690223453123, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-24 18:10:53,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-24 18:10:53,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:42402 deadline: 1690223453125, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 18:10:53,127 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:10:53,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-24 18:10:53,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-24 18:10:53,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 809 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:42402 deadline: 1690223453133, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-24 18:10:53,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:53,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:53,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:53,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:53,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,147 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:53,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:53,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 833 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223453157, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:53,160 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:53,162 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,162 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:53,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:53,163 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,180 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=509 (was 505) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x6dc94268-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=771 (was 771), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=591 (was 591), ProcessCount=177 (was 177), AvailableMemoryMB=5410 (was 5411) 2023-07-24 18:10:53,180 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-24 18:10:53,196 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=509, OpenFileDescriptor=771, MaxFileDescriptor=60000, SystemLoadAverage=591, ProcessCount=177, AvailableMemoryMB=5409 2023-07-24 18:10:53,196 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=509 is superior to 500 2023-07-24 18:10:53,196 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-24 18:10:53,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:53,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:53,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:53,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:53,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:53,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:53,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:53,208 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:53,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:53,210 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:53,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:53,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-24 18:10:53,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 861 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223453219, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist.
2023-07-24 18:10:53,219 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
    at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
    at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
    at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
    at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
    at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
    at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist.
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
    at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
    at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
    at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
    at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
    at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
    at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
    at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
    at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    ... 1 more
2023-07-24 18:10:53,221 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-24 18:10:53,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-24 18:10:53,221 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-24 18:10:53,222 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-24 18:10:53,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-24 18:10:53,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-24 18:10:53,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-24 18:10:53,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-24 18:10:53,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1308945965
2023-07-24 18:10:53,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965
2023-07-24 18:10:53,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24
18:10:53,229 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:53,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:53,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup Group_testDisabledTableMove_1308945965 2023-07-24 18:10:53,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965 2023-07-24 18:10:53,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:53,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-24 18:10:53,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to default 2023-07-24 18:10:53,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1308945965 2023-07-24 18:10:53,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:53,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:53,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:53,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1308945965 2023-07-24 18:10:53,256 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:53,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:53,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:53,262 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:53,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-24 18:10:53,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 18:10:53,264 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:53,265 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965 2023-07-24 18:10:53,265 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:53,266 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:53,269 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:53,274 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,274 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,274 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,274 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,274 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da empty. 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 empty. 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 empty. 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f empty. 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 empty. 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,276 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,277 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,277 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,277 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,277 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 18:10:53,293 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:53,295 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 01b3764993ad038e9da8406f6ff18566, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:53,295 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9a1ba048b11f75a31d1d51408f70547f, NAME => 'Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:53,295 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 478d9fc727054514b981dadffdc113da, NAME => 'Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 9a1ba048b11f75a31d1d51408f70547f, disabling compactions & flushes 2023-07-24 18:10:53,331 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. after waiting 0 ms 2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:53,331 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 
2023-07-24 18:10:53,331 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 9a1ba048b11f75a31d1d51408f70547f: 2023-07-24 18:10:53,332 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0585fec20b1415c488977ce859dde9c2, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:53,332 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,332 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 478d9fc727054514b981dadffdc113da, disabling compactions & flushes 2023-07-24 18:10:53,333 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:53,333 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:53,333 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. after waiting 0 ms 2023-07-24 18:10:53,333 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:53,333 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 
2023-07-24 18:10:53,333 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 478d9fc727054514b981dadffdc113da: 2023-07-24 18:10:53,333 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => e8f641bd7525d69f8bf483b38e8c9038, NAME => 'Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 01b3764993ad038e9da8406f6ff18566, disabling compactions & flushes 2023-07-24 18:10:53,339 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. after waiting 0 ms 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,339 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,339 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 01b3764993ad038e9da8406f6ff18566: 2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 0585fec20b1415c488977ce859dde9c2, disabling compactions & flushes 2023-07-24 18:10:53,350 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 
2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. after waiting 0 ms 2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,350 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,350 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 0585fec20b1415c488977ce859dde9c2: 2023-07-24 18:10:53,353 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,353 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing e8f641bd7525d69f8bf483b38e8c9038, disabling compactions & flushes 2023-07-24 18:10:53,353 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,353 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,354 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. after waiting 0 ms 2023-07-24 18:10:53,354 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,354 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 
2023-07-24 18:10:53,354 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for e8f641bd7525d69f8bf483b38e8c9038: 2023-07-24 18:10:53,356 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:53,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253357"}]},"ts":"1690222253357"} 2023-07-24 18:10:53,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253357"}]},"ts":"1690222253357"} 2023-07-24 18:10:53,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253357"}]},"ts":"1690222253357"} 2023-07-24 18:10:53,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253357"}]},"ts":"1690222253357"} 2023-07-24 18:10:53,357 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222253357"}]},"ts":"1690222253357"} 2023-07-24 18:10:53,359 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
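The five regions just added to hbase:meta come from the pre-split CreateTableProcedure requested at 18:10:53,259 above (four split keys, five regions). As a minimal client-side sketch of the equivalent call, assuming a reachable cluster and the standard HBase 2.x client API (the test itself drives this through HBaseTestingUtility, not through this exact code):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTable {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // One column family 'f' and four split keys -> five regions, as in the log above.
      // The two middle split points in the log are binary (i\xBF\x14i\xBE and r\x1C\xC7r\x1B);
      // printable stand-ins are used here purely for illustration.
      byte[][] splitKeys = {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("iiiii"),   // stand-in for i\xBF\x14i\xBE
          Bytes.toBytes("rrrrr"),   // stand-in for r\x1C\xC7r\x1B
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splitKeys);  // the master stores and runs a CreateTableProcedure (pid=132 above)
    }
  }
}
```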
2023-07-24 18:10:53,360 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:53,360 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253360"}]},"ts":"1690222253360"} 2023-07-24 18:10:53,361 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-24 18:10:53,365 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:53,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 18:10:53,365 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:53,365 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:53,365 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:53,365 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, ASSIGN}] 2023-07-24 18:10:53,367 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, ASSIGN 2023-07-24 18:10:53,367 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, ASSIGN 2023-07-24 18:10:53,367 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, ASSIGN 2023-07-24 18:10:53,368 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, ASSIGN 2023-07-24 
18:10:53,368 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, ASSIGN 2023-07-24 18:10:53,368 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:53,368 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:53,368 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:53,368 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42261,1690222228228; forceNewPlan=false, retain=false 2023-07-24 18:10:53,368 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46109,1690222228457; forceNewPlan=false, retain=false 2023-07-24 18:10:53,518 INFO [jenkins-hbase4:46543] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
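The assignment plans above place all five regions on jenkins-hbase4.apache.org,42261 and jenkins-hbase4.apache.org,46109, i.e. on the servers still in the default group; this is consistent with the earlier MoveServers call at 18:10:53,242, which moved only the servers 34389 and 40159 into Group_testDisabledTableMove_1308945965 while the new table still belongs to the default group. A rough sketch of the admin calls behind the AddRSGroup/MoveServers/MoveTables requests in this log, assuming the RSGroupAdminClient class from the hbase-rsgroup module (an audience-private class that the test wraps in VerifyingRSGroupAdminClient; constructor visibility and exact signatures are an assumption here):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersThenTable {
  // Sketch of the group setup recorded above; connection creation and error handling elided.
  static void run(Connection conn) throws Exception {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    String group = "Group_testDisabledTableMove_1308945965";

    groups.addRSGroup(group);                       // RSGroupAdminService.AddRSGroup

    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 34389));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 40159));
    groups.moveServers(servers, group);             // RSGroupAdminService.MoveServers

    // The table stays in the default group until it is explicitly moved, which is
    // why its regions are assigned to the remaining default-group servers above.
    groups.moveTables(
        Collections.singleton(TableName.valueOf("Group_testDisabledTableMove")),
        group);                                     // RSGroupAdminService.MoveTables
  }
}
```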
2023-07-24 18:10:53,522 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=e8f641bd7525d69f8bf483b38e8c9038, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,522 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=01b3764993ad038e9da8406f6ff18566, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,522 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253522"}]},"ts":"1690222253522"} 2023-07-24 18:10:53,522 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=9a1ba048b11f75a31d1d51408f70547f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,522 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=478d9fc727054514b981dadffdc113da, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,522 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=0585fec20b1415c488977ce859dde9c2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,522 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253522"}]},"ts":"1690222253522"} 2023-07-24 18:10:53,522 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253522"}]},"ts":"1690222253522"} 2023-07-24 18:10:53,522 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253522"}]},"ts":"1690222253522"} 2023-07-24 18:10:53,522 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253522"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253522"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253522"}]},"ts":"1690222253522"} 2023-07-24 18:10:53,523 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=137, state=RUNNABLE; OpenRegionProcedure e8f641bd7525d69f8bf483b38e8c9038, 
server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,524 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=133, state=RUNNABLE; OpenRegionProcedure 478d9fc727054514b981dadffdc113da, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:53,525 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=136, state=RUNNABLE; OpenRegionProcedure 0585fec20b1415c488977ce859dde9c2, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:53,526 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure 9a1ba048b11f75a31d1d51408f70547f, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,527 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=135, state=RUNNABLE; OpenRegionProcedure 01b3764993ad038e9da8406f6ff18566, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 18:10:53,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,679 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e8f641bd7525d69f8bf483b38e8c9038, NAME => 'Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0585fec20b1415c488977ce859dde9c2, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0585fec20b1415c488977ce859dde9c2 2023-07-24 
18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,682 INFO [StoreOpener-0585fec20b1415c488977ce859dde9c2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,682 INFO [StoreOpener-e8f641bd7525d69f8bf483b38e8c9038-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,683 DEBUG [StoreOpener-0585fec20b1415c488977ce859dde9c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/f 2023-07-24 18:10:53,683 DEBUG [StoreOpener-0585fec20b1415c488977ce859dde9c2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/f 2023-07-24 18:10:53,683 DEBUG [StoreOpener-e8f641bd7525d69f8bf483b38e8c9038-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/f 2023-07-24 18:10:53,684 DEBUG [StoreOpener-e8f641bd7525d69f8bf483b38e8c9038-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/f 2023-07-24 18:10:53,684 INFO [StoreOpener-0585fec20b1415c488977ce859dde9c2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0585fec20b1415c488977ce859dde9c2 columnFamilyName f 2023-07-24 18:10:53,684 INFO [StoreOpener-e8f641bd7525d69f8bf483b38e8c9038-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality 
to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e8f641bd7525d69f8bf483b38e8c9038 columnFamilyName f 2023-07-24 18:10:53,684 INFO [StoreOpener-0585fec20b1415c488977ce859dde9c2-1] regionserver.HStore(310): Store=0585fec20b1415c488977ce859dde9c2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,685 INFO [StoreOpener-e8f641bd7525d69f8bf483b38e8c9038-1] regionserver.HStore(310): Store=e8f641bd7525d69f8bf483b38e8c9038/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:53,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:53,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0585fec20b1415c488977ce859dde9c2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10859096320, jitterRate=0.011332154273986816}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0585fec20b1415c488977ce859dde9c2: 2023-07-24 18:10:53,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2., pid=140, masterSystemTime=1690222253676 2023-07-24 18:10:53,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e8f641bd7525d69f8bf483b38e8c9038; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11409600800, jitterRate=0.06260187923908234}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e8f641bd7525d69f8bf483b38e8c9038: 2023-07-24 18:10:53,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038., pid=138, masterSystemTime=1690222253675 2023-07-24 18:10:53,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:53,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 
2023-07-24 18:10:53,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 478d9fc727054514b981dadffdc113da, NAME => 'Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-24 18:10:53,697 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=0585fec20b1415c488977ce859dde9c2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,697 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253697"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253697"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253697"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253697"}]},"ts":"1690222253697"} 2023-07-24 18:10:53,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,699 INFO [StoreOpener-478d9fc727054514b981dadffdc113da-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:53,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 
2023-07-24 18:10:53,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 01b3764993ad038e9da8406f6ff18566, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-24 18:10:53,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,700 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=e8f641bd7525d69f8bf483b38e8c9038, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,700 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253700"}]},"ts":"1690222253700"} 2023-07-24 18:10:53,701 INFO [StoreOpener-01b3764993ad038e9da8406f6ff18566-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,702 DEBUG [StoreOpener-478d9fc727054514b981dadffdc113da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/f 2023-07-24 18:10:53,702 DEBUG [StoreOpener-478d9fc727054514b981dadffdc113da-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/f 2023-07-24 18:10:53,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=136 2023-07-24 18:10:53,702 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=136, state=SUCCESS; OpenRegionProcedure 0585fec20b1415c488977ce859dde9c2, server=jenkins-hbase4.apache.org,46109,1690222228457 in 173 msec 2023-07-24 18:10:53,703 INFO [StoreOpener-478d9fc727054514b981dadffdc113da-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 478d9fc727054514b981dadffdc113da columnFamilyName f 2023-07-24 18:10:53,704 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, ASSIGN in 337 msec 2023-07-24 18:10:53,704 INFO [StoreOpener-478d9fc727054514b981dadffdc113da-1] regionserver.HStore(310): Store=478d9fc727054514b981dadffdc113da/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=137 2023-07-24 18:10:53,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=137, state=SUCCESS; OpenRegionProcedure e8f641bd7525d69f8bf483b38e8c9038, server=jenkins-hbase4.apache.org,42261,1690222228228 in 178 msec 2023-07-24 18:10:53,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,705 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, ASSIGN in 339 msec 2023-07-24 18:10:53,705 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,706 DEBUG [StoreOpener-01b3764993ad038e9da8406f6ff18566-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/f 2023-07-24 18:10:53,706 DEBUG [StoreOpener-01b3764993ad038e9da8406f6ff18566-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/f 2023-07-24 18:10:53,706 INFO [StoreOpener-01b3764993ad038e9da8406f6ff18566-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 01b3764993ad038e9da8406f6ff18566 columnFamilyName f 2023-07-24 18:10:53,707 INFO [StoreOpener-01b3764993ad038e9da8406f6ff18566-1] regionserver.HStore(310): Store=01b3764993ad038e9da8406f6ff18566/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:53,708 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,711 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 478d9fc727054514b981dadffdc113da; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11652557280, jitterRate=0.08522896468639374}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:53,711 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 478d9fc727054514b981dadffdc113da: 2023-07-24 18:10:53,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da., pid=139, masterSystemTime=1690222253676 2023-07-24 18:10:53,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:53,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 
2023-07-24 18:10:53,713 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=478d9fc727054514b981dadffdc113da, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,714 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253713"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253713"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253713"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253713"}]},"ts":"1690222253713"} 2023-07-24 18:10:53,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 01b3764993ad038e9da8406f6ff18566; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11895573120, jitterRate=0.10786157846450806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 01b3764993ad038e9da8406f6ff18566: 2023-07-24 18:10:53,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566., pid=142, masterSystemTime=1690222253675 2023-07-24 18:10:53,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,716 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:53,716 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 
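The records above trace the server side of creating and opening the five pre-split regions of Group_testDisabledTableMove. For reference, the matching client-side call is an Admin.createTable with explicit split keys; the following is a minimal sketch only, assuming the standard HBase 2.x client API — the table name, column family "f" and split points are taken from the log, while the class name, connection handling and everything else are illustrative, not the test's actual code:

    // Sketch only: creates a table with column family "f" and four split points,
    // which yields the five regions whose opening is logged above.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTableSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(tableName)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Split points as they appear (escaped) in the log:
          // 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'.
          byte[][] splitKeys = new byte[][] {
              Bytes.toBytes("aaaaa"),
              new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
              new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
              Bytes.toBytes("zzzzz")
          };
          // Drives the CreateTableProcedure (pid=132 in the log) on the master.
          admin.createTable(desc, splitKeys);
        }
      }
    }
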
2023-07-24 18:10:53,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=133 2023-07-24 18:10:53,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9a1ba048b11f75a31d1d51408f70547f, NAME => 'Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-24 18:10:53,717 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=133, state=SUCCESS; OpenRegionProcedure 478d9fc727054514b981dadffdc113da, server=jenkins-hbase4.apache.org,46109,1690222228457 in 191 msec 2023-07-24 18:10:53,717 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=01b3764993ad038e9da8406f6ff18566, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,717 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253717"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253717"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253717"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253717"}]},"ts":"1690222253717"} 2023-07-24 18:10:53,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:53,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,718 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, ASSIGN in 352 msec 2023-07-24 18:10:53,718 INFO [StoreOpener-9a1ba048b11f75a31d1d51408f70547f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=135 2023-07-24 18:10:53,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=135, state=SUCCESS; OpenRegionProcedure 01b3764993ad038e9da8406f6ff18566, server=jenkins-hbase4.apache.org,42261,1690222228228 in 191 msec 2023-07-24 18:10:53,720 DEBUG [StoreOpener-9a1ba048b11f75a31d1d51408f70547f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/f 2023-07-24 18:10:53,720 DEBUG [StoreOpener-9a1ba048b11f75a31d1d51408f70547f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/f 2023-07-24 18:10:53,720 INFO [StoreOpener-9a1ba048b11f75a31d1d51408f70547f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9a1ba048b11f75a31d1d51408f70547f columnFamilyName f 2023-07-24 18:10:53,721 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, ASSIGN in 355 msec 2023-07-24 18:10:53,721 INFO [StoreOpener-9a1ba048b11f75a31d1d51408f70547f-1] regionserver.HStore(310): Store=9a1ba048b11f75a31d1d51408f70547f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:53,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:53,726 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:53,727 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9a1ba048b11f75a31d1d51408f70547f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9889644000, jitterRate=-0.07895512878894806}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:53,727 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9a1ba048b11f75a31d1d51408f70547f: 2023-07-24 18:10:53,728 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy 
tasks for Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f., pid=141, masterSystemTime=1690222253675 2023-07-24 18:10:53,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:53,729 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:53,730 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=9a1ba048b11f75a31d1d51408f70547f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,730 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253729"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222253729"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222253729"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222253729"}]},"ts":"1690222253729"} 2023-07-24 18:10:53,732 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-24 18:10:53,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure 9a1ba048b11f75a31d1d51408f70547f, server=jenkins-hbase4.apache.org,42261,1690222228228 in 205 msec 2023-07-24 18:10:53,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-24 18:10:53,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, ASSIGN in 368 msec 2023-07-24 18:10:53,734 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:53,735 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253735"}]},"ts":"1690222253735"} 2023-07-24 18:10:53,736 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-24 18:10:53,739 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:53,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 480 msec 2023-07-24 18:10:53,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-24 18:10:53,867 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-24 18:10:53,867 DEBUG [Listener at 
localhost/39007] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-24 18:10:53,867 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,871 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-24 18:10:53,871 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,871 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-24 18:10:53,872 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:53,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 18:10:53,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:53,879 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 18:10:53,880 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 18:10:53,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:53,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 18:10:53,885 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222253885"}]},"ts":"1690222253885"} 2023-07-24 18:10:53,886 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-24 18:10:53,889 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-24 18:10:53,890 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, UNASSIGN}] 2023-07-24 18:10:53,893 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, UNASSIGN 2023-07-24 18:10:53,893 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, UNASSIGN 2023-07-24 18:10:53,893 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, UNASSIGN 2023-07-24 18:10:53,893 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, UNASSIGN 2023-07-24 18:10:53,893 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, UNASSIGN 2023-07-24 18:10:53,894 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=9a1ba048b11f75a31d1d51408f70547f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,894 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253893"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253893"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253893"}]},"ts":"1690222253893"} 2023-07-24 18:10:53,894 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=e8f641bd7525d69f8bf483b38e8c9038, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,894 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=0585fec20b1415c488977ce859dde9c2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,894 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253894"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253894"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253894"}]},"ts":"1690222253894"} 2023-07-24 18:10:53,894 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253894"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253894"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253894"}]},"ts":"1690222253894"} 2023-07-24 18:10:53,894 INFO [PEWorker-1] assignment.RegionStateStore(219): 
pid=146 updating hbase:meta row=01b3764993ad038e9da8406f6ff18566, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:53,895 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222253894"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253894"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253894"}]},"ts":"1690222253894"} 2023-07-24 18:10:53,895 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=478d9fc727054514b981dadffdc113da, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:53,895 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222253895"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222253895"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222253895"}]},"ts":"1690222253895"} 2023-07-24 18:10:53,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure 9a1ba048b11f75a31d1d51408f70547f, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,896 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=148, state=RUNNABLE; CloseRegionProcedure e8f641bd7525d69f8bf483b38e8c9038, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,897 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=147, state=RUNNABLE; CloseRegionProcedure 0585fec20b1415c488977ce859dde9c2, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:53,898 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=146, state=RUNNABLE; CloseRegionProcedure 01b3764993ad038e9da8406f6ff18566, server=jenkins-hbase4.apache.org,42261,1690222228228}] 2023-07-24 18:10:53,899 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=144, state=RUNNABLE; CloseRegionProcedure 478d9fc727054514b981dadffdc113da, server=jenkins-hbase4.apache.org,46109,1690222228457}] 2023-07-24 18:10:53,985 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 18:10:54,048 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:54,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9a1ba048b11f75a31d1d51408f70547f, disabling compactions & flushes 2023-07-24 18:10:54,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:54,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 
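The "Waiting until all regions ... get assigned" and "Started disable of Group_testDisabledTableMove" records above are driven from the test client. A minimal sketch of that sequence, assuming only the HBaseTestingUtility and Admin APIs the log itself names (the helper method and variable names are illustrative):

    // Sketch only: testUtil is assumed to be the already-running mini cluster from this log.
    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class DisableTableSketch {
      static void waitThenDisable(HBaseTestingUtility testUtil) throws IOException {
        TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
        // Blocks until every region of the table is assigned
        // (the "Waiting until all regions ... get assigned" record above).
        testUtil.waitUntilAllRegionsAssigned(tableName);
        try (Admin admin = testUtil.getConnection().getAdmin()) {
          // Drives the DisableTableProcedure (pid=143 above) and returns once the
          // table is marked DISABLED in hbase:meta and its regions are closed.
          admin.disableTable(tableName);
        }
      }
    }
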
2023-07-24 18:10:54,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. after waiting 0 ms 2023-07-24 18:10:54,049 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:54,051 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:54,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 478d9fc727054514b981dadffdc113da, disabling compactions & flushes 2023-07-24 18:10:54,052 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:54,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:54,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. after waiting 0 ms 2023-07-24 18:10:54,052 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 2023-07-24 18:10:54,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:54,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f. 2023-07-24 18:10:54,055 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9a1ba048b11f75a31d1d51408f70547f: 2023-07-24 18:10:54,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:54,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da. 
2023-07-24 18:10:54,056 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 478d9fc727054514b981dadffdc113da: 2023-07-24 18:10:54,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:54,057 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:54,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 01b3764993ad038e9da8406f6ff18566, disabling compactions & flushes 2023-07-24 18:10:54,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:54,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:54,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. after waiting 0 ms 2023-07-24 18:10:54,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:54,058 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=9a1ba048b11f75a31d1d51408f70547f, regionState=CLOSED 2023-07-24 18:10:54,058 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222254058"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254058"}]},"ts":"1690222254058"} 2023-07-24 18:10:54,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 478d9fc727054514b981dadffdc113da 2023-07-24 18:10:54,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:54,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0585fec20b1415c488977ce859dde9c2, disabling compactions & flushes 2023-07-24 18:10:54,062 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:54,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:54,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. after waiting 0 ms 2023-07-24 18:10:54,062 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 
2023-07-24 18:10:54,062 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=478d9fc727054514b981dadffdc113da, regionState=CLOSED 2023-07-24 18:10:54,062 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222254062"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254062"}]},"ts":"1690222254062"} 2023-07-24 18:10:54,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-24 18:10:54,063 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure 9a1ba048b11f75a31d1d51408f70547f, server=jenkins-hbase4.apache.org,42261,1690222228228 in 165 msec 2023-07-24 18:10:54,063 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:54,064 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566. 2023-07-24 18:10:54,064 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 01b3764993ad038e9da8406f6ff18566: 2023-07-24 18:10:54,064 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=9a1ba048b11f75a31d1d51408f70547f, UNASSIGN in 173 msec 2023-07-24 18:10:54,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:54,065 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:54,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e8f641bd7525d69f8bf483b38e8c9038, disabling compactions & flushes 2023-07-24 18:10:54,066 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:54,066 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=01b3764993ad038e9da8406f6ff18566, regionState=CLOSED 2023-07-24 18:10:54,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:54,066 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 
after waiting 0 ms 2023-07-24 18:10:54,067 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222254066"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254066"}]},"ts":"1690222254066"} 2023-07-24 18:10:54,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 2023-07-24 18:10:54,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:54,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=144 2023-07-24 18:10:54,067 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=144, state=SUCCESS; CloseRegionProcedure 478d9fc727054514b981dadffdc113da, server=jenkins-hbase4.apache.org,46109,1690222228457 in 165 msec 2023-07-24 18:10:54,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2. 2023-07-24 18:10:54,068 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0585fec20b1415c488977ce859dde9c2: 2023-07-24 18:10:54,068 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=478d9fc727054514b981dadffdc113da, UNASSIGN in 177 msec 2023-07-24 18:10:54,069 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:54,069 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=0585fec20b1415c488977ce859dde9c2, regionState=CLOSED 2023-07-24 18:10:54,069 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690222254069"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254069"}]},"ts":"1690222254069"} 2023-07-24 18:10:54,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=146 2023-07-24 18:10:54,070 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=146, state=SUCCESS; CloseRegionProcedure 01b3764993ad038e9da8406f6ff18566, server=jenkins-hbase4.apache.org,42261,1690222228228 in 170 msec 2023-07-24 18:10:54,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:54,071 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038. 
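The records that follow show the now-disabled table being moved to rsgroup Group_testDisabledTableMove_1308945965 (no regions actually move, since the table is disabled), a second disable attempt failing with TableNotEnabledException, and finally the delete. A sketch of that client-side sequence, under the assumption that the test goes through the RSGroupAdminClient API of the hbase-rsgroup module; the group name is the generated one from this run and everything else is illustrative:

    // Sketch only: assumes RSGroupAdminClient from the hbase-rsgroup module.
    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveDisabledTableSketch {
      static void moveThenDelete(Connection conn) throws IOException {
        TableName tableName = TableName.valueOf("Group_testDisabledTableMove");
        String targetGroup = "Group_testDisabledTableMove_1308945965"; // generated per test run
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        // Because the table is disabled, the master only rewrites group membership;
        // it logs "Skipping move regions because the table ... is disabled" below.
        rsGroupAdmin.moveTables(Collections.singleton(tableName), targetGroup);
        try (Admin admin = conn.getAdmin()) {
          try {
            admin.disableTable(tableName);
          } catch (TableNotEnabledException e) {
            // Table is already disabled; the master raises exactly this exception below.
          }
          admin.deleteTable(tableName);
        }
      }
    }
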
2023-07-24 18:10:54,071 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e8f641bd7525d69f8bf483b38e8c9038: 2023-07-24 18:10:54,071 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=01b3764993ad038e9da8406f6ff18566, UNASSIGN in 180 msec 2023-07-24 18:10:54,072 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:54,072 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=147 2023-07-24 18:10:54,072 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=147, state=SUCCESS; CloseRegionProcedure 0585fec20b1415c488977ce859dde9c2, server=jenkins-hbase4.apache.org,46109,1690222228457 in 174 msec 2023-07-24 18:10:54,073 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=e8f641bd7525d69f8bf483b38e8c9038, regionState=CLOSED 2023-07-24 18:10:54,073 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690222254073"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222254073"}]},"ts":"1690222254073"} 2023-07-24 18:10:54,074 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0585fec20b1415c488977ce859dde9c2, UNASSIGN in 182 msec 2023-07-24 18:10:54,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=148 2023-07-24 18:10:54,076 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=148, state=SUCCESS; CloseRegionProcedure e8f641bd7525d69f8bf483b38e8c9038, server=jenkins-hbase4.apache.org,42261,1690222228228 in 178 msec 2023-07-24 18:10:54,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=143 2023-07-24 18:10:54,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=e8f641bd7525d69f8bf483b38e8c9038, UNASSIGN in 186 msec 2023-07-24 18:10:54,078 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222254078"}]},"ts":"1690222254078"} 2023-07-24 18:10:54,079 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-24 18:10:54,081 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-24 18:10:54,083 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 202 msec 2023-07-24 18:10:54,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-24 18:10:54,186 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-24 
18:10:54,186 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:54,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-24 18:10:54,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1308945965, current retry=0 2023-07-24 18:10:54,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1308945965. 2023-07-24 18:10:54,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:54,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-24 18:10:54,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:10:54,199 INFO [Listener at localhost/39007] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-24 18:10:54,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-24 18:10:54,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at 
org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:54,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:42402 deadline: 1690222314199, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-24 18:10:54,200 DEBUG [Listener at localhost/39007] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-24 18:10:54,201 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-24 18:10:54,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,203 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1308945965' 2023-07-24 18:10:54,204 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:54,210 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:54,211 DEBUG
[HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:54,211 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:54,211 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:54,211 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:54,214 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/recovered.edits] 2023-07-24 18:10:54,214 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/recovered.edits] 2023-07-24 18:10:54,214 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/recovered.edits] 2023-07-24 18:10:54,215 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/recovered.edits] 2023-07-24 18:10:54,215 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/f, FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/recovered.edits] 2023-07-24 18:10:54,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 18:10:54,226 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f/recovered.edits/4.seqid 2023-07-24 18:10:54,227 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2/recovered.edits/4.seqid 2023-07-24 18:10:54,227 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/9a1ba048b11f75a31d1d51408f70547f 2023-07-24 18:10:54,227 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038/recovered.edits/4.seqid 2023-07-24 18:10:54,227 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566/recovered.edits/4.seqid 2023-07-24 18:10:54,228 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/recovered.edits/4.seqid to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/archive/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da/recovered.edits/4.seqid 2023-07-24 18:10:54,228 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/0585fec20b1415c488977ce859dde9c2 2023-07-24 18:10:54,229 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/e8f641bd7525d69f8bf483b38e8c9038 2023-07-24 18:10:54,229 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/01b3764993ad038e9da8406f6ff18566 2023-07-24 18:10:54,229 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/.tmp/data/default/Group_testDisabledTableMove/478d9fc727054514b981dadffdc113da 2023-07-24 18:10:54,229 
DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-24 18:10:54,231 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,233 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-24 18:10:54,238 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-24 18:10:54,239 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,239 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-24 18:10:54,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222254240"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222254240"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222254240"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222254240"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,240 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222254240"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,241 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-24 18:10:54,242 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 478d9fc727054514b981dadffdc113da, NAME => 'Group_testDisabledTableMove,,1690222253258.478d9fc727054514b981dadffdc113da.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 9a1ba048b11f75a31d1d51408f70547f, NAME => 'Group_testDisabledTableMove,aaaaa,1690222253258.9a1ba048b11f75a31d1d51408f70547f.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 01b3764993ad038e9da8406f6ff18566, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690222253258.01b3764993ad038e9da8406f6ff18566.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 0585fec20b1415c488977ce859dde9c2, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690222253258.0585fec20b1415c488977ce859dde9c2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => e8f641bd7525d69f8bf483b38e8c9038, NAME => 
'Group_testDisabledTableMove,zzzzz,1690222253258.e8f641bd7525d69f8bf483b38e8c9038.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-24 18:10:54,242 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 2023-07-24 18:10:54,242 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222254242"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:54,243 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-24 18:10:54,245 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-24 18:10:54,246 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 44 msec 2023-07-24 18:10:54,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-24 18:10:54,318 INFO [Listener at localhost/39007] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-24 18:10:54,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:54,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:54,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:54,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159] to rsgroup default 2023-07-24 18:10:54,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,326 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:10:54,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1308945965, current retry=0 2023-07-24 18:10:54,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,34389,1690222232023, jenkins-hbase4.apache.org,40159,1690222227976] are moved back to Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1308945965 => default 2023-07-24 18:10:54,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:54,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1308945965 2023-07-24 18:10:54,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,332 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:10:54,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:54,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:54,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:10:54,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:54,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:54,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:54,336 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:54,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:54,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:54,344 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:54,345 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:54,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:54,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:54,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:54,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:54,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223454354, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:54,355 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:10:54,357 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:54,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,358 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:54,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:54,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:54,377 INFO [Listener at localhost/39007] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512 (was 509) Potentially hanging thread: hconnection-0x6043b73e-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-544298035_17 at /127.0.0.1:45298 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-111891419_17 at /127.0.0.1:45314 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2f235fbd-shared-pool-27 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=803 (was 771) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=576 (was 591), ProcessCount=177 (was 177), AvailableMemoryMB=5402 (was 5409) 2023-07-24 18:10:54,377 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 18:10:54,398 INFO [Listener at localhost/39007] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=512, OpenFileDescriptor=803, MaxFileDescriptor=60000, SystemLoadAverage=576, ProcessCount=177, AvailableMemoryMB=5401 2023-07-24 18:10:54,398 WARN [Listener at localhost/39007] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-24 18:10:54,398 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-24 18:10:54,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:10:54,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:10:54,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:10:54,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:10:54,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:10:54,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:10:54,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:54,408 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:10:54,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:10:54,413 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:10:54,414 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:10:54,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 
18:10:54,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:10:54,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:10:54,422 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:10:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,424 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:46543] to rsgroup master 2023-07-24 18:10:54,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:10:54,426 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:42402 deadline: 1690223454426, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 2023-07-24 18:10:54,427 WARN [Listener at localhost/39007] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:46543 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:10:54,428 INFO [Listener at localhost/39007] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:54,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:54,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:54,429 INFO [Listener at localhost/39007] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:34389, jenkins-hbase4.apache.org:40159, jenkins-hbase4.apache.org:42261, jenkins-hbase4.apache.org:46109], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:10:54,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:10:54,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46543] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:10:54,430 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 18:10:54,430 INFO [Listener at localhost/39007] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:10:54,430 DEBUG [Listener at localhost/39007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5c97849b to 127.0.0.1:51807 2023-07-24 18:10:54,431 DEBUG [Listener at localhost/39007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,431 DEBUG [Listener at localhost/39007] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:10:54,431 DEBUG [Listener at localhost/39007] util.JVMClusterUtil(257): Found active master hash=1688575689, stopped=false 2023-07-24 18:10:54,432 DEBUG [Listener at localhost/39007] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:10:54,432 DEBUG [Listener at localhost/39007] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:10:54,432 INFO [Listener at localhost/39007] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:54,434 INFO [Listener at localhost/39007] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:54,434 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:54,434 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:54,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:54,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:54,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:54,435 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:54,435 DEBUG [Listener at localhost/39007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x09645a10 to 127.0.0.1:51807 2023-07-24 18:10:54,435 DEBUG [Listener at localhost/39007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,435 INFO [Listener at localhost/39007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40159,1690222227976' ***** 2023-07-24 18:10:54,435 INFO [Listener at localhost/39007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:54,436 INFO [Listener at localhost/39007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42261,1690222228228' ***** 2023-07-24 18:10:54,436 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:54,436 INFO [Listener at localhost/39007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:54,436 INFO [Listener at localhost/39007] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46109,1690222228457' ***** 2023-07-24 18:10:54,436 INFO [Listener at localhost/39007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:54,436 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:54,436 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:54,436 INFO [Listener at localhost/39007] regionserver.HRegionServer(2297): ***** STOPPING region server 
'jenkins-hbase4.apache.org,34389,1690222232023' ***** 2023-07-24 18:10:54,442 INFO [Listener at localhost/39007] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:10:54,445 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:54,454 INFO [RS:1;jenkins-hbase4:42261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4bc6a9e2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:54,454 INFO [RS:2;jenkins-hbase4:46109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6c3aed70{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:54,454 INFO [RS:0;jenkins-hbase4:40159] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2068cbfe{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:54,454 INFO [RS:3;jenkins-hbase4:34389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@591c5ecf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:54,458 INFO [RS:3;jenkins-hbase4:34389] server.AbstractConnector(383): Stopped ServerConnector@157f8446{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:54,458 INFO [RS:0;jenkins-hbase4:40159] server.AbstractConnector(383): Stopped ServerConnector@2cb4cda5{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:54,458 INFO [RS:3;jenkins-hbase4:34389] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:54,458 INFO [RS:2;jenkins-hbase4:46109] server.AbstractConnector(383): Stopped ServerConnector@6c3f6670{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:54,459 INFO [RS:3;jenkins-hbase4:34389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@552c5220{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:54,458 INFO [RS:0;jenkins-hbase4:40159] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:54,459 INFO [RS:2;jenkins-hbase4:46109] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:54,458 INFO [RS:1;jenkins-hbase4:42261] server.AbstractConnector(383): Stopped ServerConnector@6b7406ba{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:54,461 INFO [RS:0;jenkins-hbase4:40159] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@b134c3c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:54,460 INFO [RS:3;jenkins-hbase4:34389] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@41aa49ad{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:54,462 INFO [RS:1;jenkins-hbase4:42261] session.HouseKeeper(149): node0 Stopped 
scavenging 2023-07-24 18:10:54,462 INFO [RS:2;jenkins-hbase4:46109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5857c9af{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:54,463 INFO [RS:1;jenkins-hbase4:42261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@454bace{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:54,464 INFO [RS:2;jenkins-hbase4:46109] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@709df1b3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:54,464 INFO [RS:1;jenkins-hbase4:42261] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@34863637{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:54,462 INFO [RS:0;jenkins-hbase4:40159] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@705c29b1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:54,467 INFO [RS:0;jenkins-hbase4:40159] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:54,467 INFO [RS:1;jenkins-hbase4:42261] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:54,467 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:54,467 INFO [RS:0;jenkins-hbase4:40159] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:54,467 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:54,468 INFO [RS:0;jenkins-hbase4:40159] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:54,468 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:54,468 DEBUG [RS:0;jenkins-hbase4:40159] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x79caa47e to 127.0.0.1:51807 2023-07-24 18:10:54,468 DEBUG [RS:0;jenkins-hbase4:40159] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,468 INFO [RS:1;jenkins-hbase4:42261] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:54,468 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40159,1690222227976; all regions closed. 2023-07-24 18:10:54,468 INFO [RS:1;jenkins-hbase4:42261] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 18:10:54,468 INFO [RS:3;jenkins-hbase4:34389] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:54,468 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(3305): Received CLOSE for d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:54,469 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:54,469 INFO [RS:2;jenkins-hbase4:46109] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:10:54,469 INFO [RS:3;jenkins-hbase4:34389] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:54,469 INFO [RS:2;jenkins-hbase4:46109] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:10:54,469 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:10:54,469 INFO [RS:2;jenkins-hbase4:46109] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:54,469 INFO [RS:3;jenkins-hbase4:34389] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:10:54,469 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(3305): Received CLOSE for b87f0cecb71881f8123dd940c9454207 2023-07-24 18:10:54,469 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:54,470 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(3305): Received CLOSE for 7a7f564afa8892e109c3421f089102f9 2023-07-24 18:10:54,470 DEBUG [RS:1;jenkins-hbase4:42261] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3523a468 to 127.0.0.1:51807 2023-07-24 18:10:54,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d18956f41694b9a75bceceb81de91192, disabling compactions & flushes 2023-07-24 18:10:54,469 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:54,473 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:54,473 DEBUG [RS:3;jenkins-hbase4:34389] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x13db7840 to 127.0.0.1:51807 2023-07-24 18:10:54,471 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b87f0cecb71881f8123dd940c9454207, disabling compactions & flushes 2023-07-24 18:10:54,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:54,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:54,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. after waiting 0 ms 2023-07-24 18:10:54,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 
2023-07-24 18:10:54,474 DEBUG [RS:3;jenkins-hbase4:34389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,474 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34389,1690222232023; all regions closed. 2023-07-24 18:10:54,471 DEBUG [RS:1;jenkins-hbase4:42261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,474 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:10:54,470 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(3305): Received CLOSE for f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:54,475 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:54,475 DEBUG [RS:2;jenkins-hbase4:46109] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x59589414 to 127.0.0.1:51807 2023-07-24 18:10:54,475 DEBUG [RS:2;jenkins-hbase4:46109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,475 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1478): Online Regions={d18956f41694b9a75bceceb81de91192=testRename,,1690222247652.d18956f41694b9a75bceceb81de91192.} 2023-07-24 18:10:54,473 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:54,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. after waiting 0 ms 2023-07-24 18:10:54,475 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:54,475 INFO [RS:2;jenkins-hbase4:46109] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:54,475 INFO [RS:2;jenkins-hbase4:46109] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:54,475 INFO [RS:2;jenkins-hbase4:46109] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:10:54,476 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:10:54,476 DEBUG [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1504): Waiting on d18956f41694b9a75bceceb81de91192 2023-07-24 18:10:54,482 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-24 18:10:54,482 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:54,483 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1478): Online Regions={b87f0cecb71881f8123dd940c9454207=unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207., 7a7f564afa8892e109c3421f089102f9=hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9., 1588230740=hbase:meta,,1.1588230740, f78657a0e379a4435cf47a889f576b52=hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52.} 2023-07-24 18:10:54,483 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:54,483 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:54,483 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1504): Waiting on 1588230740, 7a7f564afa8892e109c3421f089102f9, b87f0cecb71881f8123dd940c9454207, f78657a0e379a4435cf47a889f576b52 2023-07-24 18:10:54,483 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:54,483 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:54,483 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=38.63 KB heapSize=63 KB 2023-07-24 18:10:54,485 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,486 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,486 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,487 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,501 DEBUG [RS:3;jenkins-hbase4:34389] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:54,502 INFO [RS:3;jenkins-hbase4:34389] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34389%2C1690222232023:(num 1690222232545) 2023-07-24 18:10:54,502 DEBUG [RS:3;jenkins-hbase4:34389] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,502 INFO [RS:3;jenkins-hbase4:34389] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,505 INFO [RS:3;jenkins-hbase4:34389] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:54,506 INFO [RS:3;jenkins-hbase4:34389] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-24 18:10:54,506 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:54,506 INFO [RS:3;jenkins-hbase4:34389] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:54,506 INFO [RS:3;jenkins-hbase4:34389] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:54,506 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/unmovedTable/b87f0cecb71881f8123dd940c9454207/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 18:10:54,507 DEBUG [RS:0;jenkins-hbase4:40159] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:54,507 INFO [RS:0;jenkins-hbase4:40159] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C40159%2C1690222227976:(num 1690222230517) 2023-07-24 18:10:54,507 DEBUG [RS:0;jenkins-hbase4:40159] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,507 INFO [RS:0;jenkins-hbase4:40159] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:54,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b87f0cecb71881f8123dd940c9454207: 2023-07-24 18:10:54,509 INFO [RS:0;jenkins-hbase4:40159] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:54,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690222249320.b87f0cecb71881f8123dd940c9454207. 2023-07-24 18:10:54,510 INFO [RS:0;jenkins-hbase4:40159] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:54,510 INFO [RS:3;jenkins-hbase4:34389] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34389 2023-07-24 18:10:54,510 INFO [RS:0;jenkins-hbase4:40159] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:54,510 INFO [RS:0;jenkins-hbase4:40159] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:10:54,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7a7f564afa8892e109c3421f089102f9, disabling compactions & flushes 2023-07-24 18:10:54,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:54,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:54,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 
after waiting 0 ms 2023-07-24 18:10:54,511 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:54,510 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:54,518 INFO [RS:0;jenkins-hbase4:40159] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40159 2023-07-24 18:10:54,518 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/default/testRename/d18956f41694b9a75bceceb81de91192/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-24 18:10:54,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:54,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d18956f41694b9a75bceceb81de91192: 2023-07-24 18:10:54,521 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690222247652.d18956f41694b9a75bceceb81de91192. 2023-07-24 18:10:54,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/namespace/7a7f564afa8892e109c3421f089102f9/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-24 18:10:54,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:54,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7a7f564afa8892e109c3421f089102f9: 2023-07-24 18:10:54,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222231038.7a7f564afa8892e109c3421f089102f9. 2023-07-24 18:10:54,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f78657a0e379a4435cf47a889f576b52, disabling compactions & flushes 2023-07-24 18:10:54,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:54,525 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:54,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. after waiting 0 ms 2023-07-24 18:10:54,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 
2023-07-24 18:10:54,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f78657a0e379a4435cf47a889f576b52 1/1 column families, dataSize=22.08 KB heapSize=36.54 KB 2023-07-24 18:10:54,528 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=35.70 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/info/56b478f46b124d4f816c02e24cfc9720 2023-07-24 18:10:54,535 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56b478f46b124d4f816c02e24cfc9720 2023-07-24 18:10:54,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.08 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/.tmp/m/a8786cd318e649d299d391e77f180922 2023-07-24 18:10:54,550 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a8786cd318e649d299d391e77f180922 2023-07-24 18:10:54,551 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/.tmp/m/a8786cd318e649d299d391e77f180922 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m/a8786cd318e649d299d391e77f180922 2023-07-24 18:10:54,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a8786cd318e649d299d391e77f180922 2023-07-24 18:10:54,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/m/a8786cd318e649d299d391e77f180922, entries=22, sequenceid=101, filesize=5.9 K 2023-07-24 18:10:54,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.08 KB/22614, heapSize ~36.52 KB/37400, currentSize=0 B/0 for f78657a0e379a4435cf47a889f576b52 in 37ms, sequenceid=101, compaction requested=false 2023-07-24 18:10:54,564 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:10:54,571 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/rsgroup/f78657a0e379a4435cf47a889f576b52/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-24 18:10:54,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:54,572 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 
2023-07-24 18:10:54,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f78657a0e379a4435cf47a889f576b52: 2023-07-24 18:10:54,572 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222231195.f78657a0e379a4435cf47a889f576b52. 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,579 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34389,1690222232023 2023-07-24 18:10:54,580 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:54,580 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:54,580 DEBUG 
[Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,580 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,40159,1690222227976 2023-07-24 18:10:54,581 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34389,1690222232023] 2023-07-24 18:10:54,581 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34389,1690222232023; numProcessing=1 2023-07-24 18:10:54,583 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34389,1690222232023 already deleted, retry=false 2023-07-24 18:10:54,583 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34389,1690222232023 expired; onlineServers=3 2023-07-24 18:10:54,583 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,40159,1690222227976] 2023-07-24 18:10:54,583 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,40159,1690222227976; numProcessing=2 2023-07-24 18:10:54,584 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,40159,1690222227976 already deleted, retry=false 2023-07-24 18:10:54,584 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,40159,1690222227976 expired; onlineServers=2 2023-07-24 18:10:54,676 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42261,1690222228228; all regions closed. 
2023-07-24 18:10:54,683 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 18:10:54,684 DEBUG [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:54,684 INFO [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42261%2C1690222228228.meta:.meta(num 1690222230797) 2023-07-24 18:10:54,696 DEBUG [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:54,696 INFO [RS:1;jenkins-hbase4:42261] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42261%2C1690222228228:(num 1690222230517) 2023-07-24 18:10:54,696 DEBUG [RS:1;jenkins-hbase4:42261] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:54,696 INFO [RS:1;jenkins-hbase4:42261] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:54,697 INFO [RS:1;jenkins-hbase4:42261] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:54,697 INFO [RS:1;jenkins-hbase4:42261] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:10:54,697 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:54,697 INFO [RS:1;jenkins-hbase4:42261] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:10:54,697 INFO [RS:1;jenkins-hbase4:42261] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:10:54,698 INFO [RS:1;jenkins-hbase4:42261] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42261 2023-07-24 18:10:54,702 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:54,702 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:54,702 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42261,1690222228228 2023-07-24 18:10:54,703 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42261,1690222228228] 2023-07-24 18:10:54,703 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42261,1690222228228; numProcessing=3 2023-07-24 18:10:54,705 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42261,1690222228228 already deleted, retry=false 2023-07-24 18:10:54,705 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42261,1690222228228 expired; onlineServers=1 2023-07-24 18:10:54,883 DEBUG [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-24 18:10:54,956 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/rep_barrier/5c1128bb70f54219a966a2e9b5ddfe72 2023-07-24 18:10:54,963 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c1128bb70f54219a966a2e9b5ddfe72 2023-07-24 18:10:54,980 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/table/1e9d10e35db44e468d532b3a94c99fa3 2023-07-24 18:10:54,988 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e9d10e35db44e468d532b3a94c99fa3 2023-07-24 18:10:54,989 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/info/56b478f46b124d4f816c02e24cfc9720 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info/56b478f46b124d4f816c02e24cfc9720 2023-07-24 18:10:54,996 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56b478f46b124d4f816c02e24cfc9720 
2023-07-24 18:10:54,996 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/info/56b478f46b124d4f816c02e24cfc9720, entries=72, sequenceid=210, filesize=13.1 K 2023-07-24 18:10:54,997 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/rep_barrier/5c1128bb70f54219a966a2e9b5ddfe72 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier/5c1128bb70f54219a966a2e9b5ddfe72 2023-07-24 18:10:55,004 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5c1128bb70f54219a966a2e9b5ddfe72 2023-07-24 18:10:55,004 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/rep_barrier/5c1128bb70f54219a966a2e9b5ddfe72, entries=8, sequenceid=210, filesize=5.8 K 2023-07-24 18:10:55,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/.tmp/table/1e9d10e35db44e468d532b3a94c99fa3 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table/1e9d10e35db44e468d532b3a94c99fa3 2023-07-24 18:10:55,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e9d10e35db44e468d532b3a94c99fa3 2023-07-24 18:10:55,012 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/table/1e9d10e35db44e468d532b3a94c99fa3, entries=16, sequenceid=210, filesize=6.0 K 2023-07-24 18:10:55,013 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~38.63 KB/39552, heapSize ~62.95 KB/64464, currentSize=0 B/0 for 1588230740 in 530ms, sequenceid=210, compaction requested=false 2023-07-24 18:10:55,013 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:10:55,026 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=95 2023-07-24 18:10:55,027 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:55,028 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:55,028 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:55,028 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 
18:10:55,035 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,035 INFO [RS:1;jenkins-hbase4:42261] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42261,1690222228228; zookeeper connection closed. 2023-07-24 18:10:55,035 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:42261-0x1019886e9540002, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,039 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6d0a31ed] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6d0a31ed 2023-07-24 18:10:55,084 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46109,1690222228457; all regions closed. 2023-07-24 18:10:55,090 DEBUG [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:55,090 INFO [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46109%2C1690222228457.meta:.meta(num 1690222239069) 2023-07-24 18:10:55,096 DEBUG [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/oldWALs 2023-07-24 18:10:55,096 INFO [RS:2;jenkins-hbase4:46109] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46109%2C1690222228457:(num 1690222230523) 2023-07-24 18:10:55,096 DEBUG [RS:2;jenkins-hbase4:46109] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:55,096 INFO [RS:2;jenkins-hbase4:46109] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:10:55,097 INFO [RS:2;jenkins-hbase4:46109] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:10:55,097 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 18:10:55,098 INFO [RS:2;jenkins-hbase4:46109] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46109 2023-07-24 18:10:55,099 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46109,1690222228457 2023-07-24 18:10:55,099 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:55,100 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46109,1690222228457] 2023-07-24 18:10:55,100 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46109,1690222228457; numProcessing=4 2023-07-24 18:10:55,102 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46109,1690222228457 already deleted, retry=false 2023-07-24 18:10:55,102 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46109,1690222228457 expired; onlineServers=0 2023-07-24 18:10:55,102 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46543,1690222225966' ***** 2023-07-24 18:10:55,102 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 18:10:55,103 DEBUG [M:0;jenkins-hbase4:46543] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@259894f2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:55,103 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:10:55,105 INFO [M:0;jenkins-hbase4:46543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@71e552e9{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:10:55,106 INFO [M:0;jenkins-hbase4:46543] server.AbstractConnector(383): Stopped ServerConnector@7f8b825b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:55,106 INFO [M:0;jenkins-hbase4:46543] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:10:55,106 INFO [M:0;jenkins-hbase4:46543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4950e91d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:10:55,107 INFO [M:0;jenkins-hbase4:46543] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b51199{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir/,STOPPED} 2023-07-24 18:10:55,107 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:55,107 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:55,107 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:55,110 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46543,1690222225966 2023-07-24 18:10:55,111 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46543,1690222225966; all regions closed. 2023-07-24 18:10:55,111 DEBUG [M:0;jenkins-hbase4:46543] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:10:55,111 INFO [M:0;jenkins-hbase4:46543] master.HMaster(1491): Stopping master jetty server 2023-07-24 18:10:55,111 INFO [M:0;jenkins-hbase4:46543] server.AbstractConnector(383): Stopped ServerConnector@5b1cb69d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:10:55,112 DEBUG [M:0;jenkins-hbase4:46543] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 18:10:55,112 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 18:10:55,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222230111] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222230111,5,FailOnTimeoutGroup] 2023-07-24 18:10:55,112 DEBUG [M:0;jenkins-hbase4:46543] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 18:10:55,112 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222230108] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222230108,5,FailOnTimeoutGroup] 2023-07-24 18:10:55,112 INFO [M:0;jenkins-hbase4:46543] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 18:10:55,112 INFO [M:0;jenkins-hbase4:46543] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-24 18:10:55,112 INFO [M:0;jenkins-hbase4:46543] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 18:10:55,112 DEBUG [M:0;jenkins-hbase4:46543] master.HMaster(1512): Stopping service threads 2023-07-24 18:10:55,112 INFO [M:0;jenkins-hbase4:46543] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 18:10:55,113 ERROR [M:0;jenkins-hbase4:46543] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-24 18:10:55,113 INFO [M:0;jenkins-hbase4:46543] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 18:10:55,114 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 18:10:55,115 DEBUG [M:0;jenkins-hbase4:46543] zookeeper.ZKUtil(398): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 18:10:55,115 WARN [M:0;jenkins-hbase4:46543] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 18:10:55,115 INFO [M:0;jenkins-hbase4:46543] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 18:10:55,115 INFO [M:0;jenkins-hbase4:46543] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 18:10:55,115 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:10:55,115 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:55,115 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:55,115 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:10:55,115 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:55,115 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.23 KB heapSize=621.38 KB 2023-07-24 18:10:55,135 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,135 INFO [RS:0;jenkins-hbase4:40159] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40159,1690222227976; zookeeper connection closed. 
2023-07-24 18:10:55,135 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:40159-0x1019886e9540001, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,135 INFO [M:0;jenkins-hbase4:46543] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.23 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a4ae3d477f744b24a9a5b959e19c66f9 2023-07-24 18:10:55,136 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4fd64c33] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4fd64c33 2023-07-24 18:10:55,141 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/a4ae3d477f744b24a9a5b959e19c66f9 as hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a4ae3d477f744b24a9a5b959e19c66f9 2023-07-24 18:10:55,151 INFO [M:0;jenkins-hbase4:46543] regionserver.HStore(1080): Added hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/a4ae3d477f744b24a9a5b959e19c66f9, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-24 18:10:55,153 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegion(2948): Finished flush of dataSize ~519.23 KB/531687, heapSize ~621.37 KB/636280, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 37ms, sequenceid=1152, compaction requested=false 2023-07-24 18:10:55,154 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:55,155 DEBUG [M:0;jenkins-hbase4:46543] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:55,160 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:10:55,160 INFO [M:0;jenkins-hbase4:46543] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 18:10:55,161 INFO [M:0;jenkins-hbase4:46543] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46543 2023-07-24 18:10:55,163 DEBUG [M:0;jenkins-hbase4:46543] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,46543,1690222225966 already deleted, retry=false 2023-07-24 18:10:55,235 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,235 INFO [RS:3;jenkins-hbase4:34389] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34389,1690222232023; zookeeper connection closed. 
2023-07-24 18:10:55,235 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:34389-0x1019886e954000b, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,236 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@79cec16a] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@79cec16a 2023-07-24 18:10:55,336 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,336 INFO [M:0;jenkins-hbase4:46543] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46543,1690222225966; zookeeper connection closed. 2023-07-24 18:10:55,336 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): master:46543-0x1019886e9540000, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,436 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,436 INFO [RS:2;jenkins-hbase4:46109] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46109,1690222228457; zookeeper connection closed. 2023-07-24 18:10:55,436 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): regionserver:46109-0x1019886e9540003, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:10:55,436 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6bf85731] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6bf85731 2023-07-24 18:10:55,436 INFO [Listener at localhost/39007] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-24 18:10:55,437 WARN [Listener at localhost/39007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:10:55,441 INFO [Listener at localhost/39007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:10:55,452 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:10:55,452 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-802604675-172.31.14.131-1690222222036 (Datanode Uuid 766ae196-7e07-47fa-950c-c13d89ace784) service to localhost/127.0.0.1:44625 2023-07-24 18:10:55,454 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data5/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:10:55,455 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data6/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:10:55,457 WARN [Listener at localhost/39007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:10:55,463 INFO [Listener at localhost/39007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:10:55,466 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:10:55,466 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-802604675-172.31.14.131-1690222222036 (Datanode Uuid 965e7a12-6020-438c-a67f-6b9609687952) service to localhost/127.0.0.1:44625 2023-07-24 18:10:55,466 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data3/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:10:55,467 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data4/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:10:55,468 WARN [Listener at localhost/39007] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:10:55,476 INFO [Listener at localhost/39007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:10:55,579 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:10:55,579 WARN [BP-802604675-172.31.14.131-1690222222036 heartbeating to localhost/127.0.0.1:44625] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-802604675-172.31.14.131-1690222222036 (Datanode Uuid 0d6b3055-580e-49a4-aae1-e80de5350415) service to localhost/127.0.0.1:44625 2023-07-24 18:10:55,579 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data1/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:10:55,580 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/cluster_69375119-9604-67c0-2612-a2a1777f31d1/dfs/data/data2/current/BP-802604675-172.31.14.131-1690222222036] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-24 18:10:55,607 INFO [Listener at localhost/39007] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:10:55,728 INFO [Listener at localhost/39007] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 18:10:55,787 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 18:10:55,787 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.log.dir so I do NOT create it in target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5ed9cb37-cef2-8395-1e6a-d53259c56c45/hadoop.tmp.dir so I do NOT create it in target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9, deleteOnExit=true 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/test.cache.data in system properties and HBase conf 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 18:10:55,788 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir in system properties and HBase conf 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 18:10:55,789 DEBUG [Listener at localhost/39007] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:10:55,789 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/nfs.dump.dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:10:55,790 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 18:10:55,791 INFO [Listener at localhost/39007] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 18:10:55,795 WARN [Listener at localhost/39007] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:10:55,795 WARN [Listener at localhost/39007] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:10:55,824 DEBUG [Listener at localhost/39007-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019886e954000a, quorum=127.0.0.1:51807, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 18:10:55,824 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019886e954000a, quorum=127.0.0.1:51807, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 18:10:55,841 WARN [Listener at localhost/39007] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:55,844 INFO [Listener at localhost/39007] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:55,849 INFO [Listener at localhost/39007] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/Jetty_localhost_33475_hdfs____.m6g61w/webapp 2023-07-24 18:10:55,951 INFO [Listener at localhost/39007] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33475 2023-07-24 18:10:55,956 WARN [Listener at localhost/39007] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:10:55,956 WARN [Listener at localhost/39007] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:10:56,003 WARN [Listener at localhost/40065] common.MetricsLoggerTask(153): Metrics logging will not 
be async since the logger is not log4j 2023-07-24 18:10:56,019 WARN [Listener at localhost/40065] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:56,021 WARN [Listener at localhost/40065] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:56,022 INFO [Listener at localhost/40065] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:56,027 INFO [Listener at localhost/40065] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/Jetty_localhost_34925_datanode____.1o5q3i/webapp 2023-07-24 18:10:56,125 INFO [Listener at localhost/40065] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34925 2023-07-24 18:10:56,132 WARN [Listener at localhost/43529] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:56,150 WARN [Listener at localhost/43529] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:56,152 WARN [Listener at localhost/43529] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:56,153 INFO [Listener at localhost/43529] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:56,156 INFO [Listener at localhost/43529] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/Jetty_localhost_39881_datanode____.sjn0cu/webapp 2023-07-24 18:10:56,158 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:10:56,159 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:10:56,159 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:10:56,244 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbe17aa89bc25099: Processing first storage report for DS-471732e8-5199-4594-b5ec-f85b7e7953fa from datanode ab93f2a9-b4e2-4ada-aabb-29ab6837313a 2023-07-24 18:10:56,245 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbe17aa89bc25099: from storage DS-471732e8-5199-4594-b5ec-f85b7e7953fa node DatanodeRegistration(127.0.0.1:41817, datanodeUuid=ab93f2a9-b4e2-4ada-aabb-29ab6837313a, infoPort=33499, infoSecurePort=0, ipcPort=43529, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,245 INFO [Block report processor] blockmanagement.BlockManager(2202): 
BLOCK* processReport 0xbe17aa89bc25099: Processing first storage report for DS-24d81e91-1d1c-4390-8077-c4ef0bebaf49 from datanode ab93f2a9-b4e2-4ada-aabb-29ab6837313a 2023-07-24 18:10:56,245 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbe17aa89bc25099: from storage DS-24d81e91-1d1c-4390-8077-c4ef0bebaf49 node DatanodeRegistration(127.0.0.1:41817, datanodeUuid=ab93f2a9-b4e2-4ada-aabb-29ab6837313a, infoPort=33499, infoSecurePort=0, ipcPort=43529, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,266 INFO [Listener at localhost/43529] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39881 2023-07-24 18:10:56,275 WARN [Listener at localhost/46613] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:10:56,299 WARN [Listener at localhost/46613] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:10:56,302 WARN [Listener at localhost/46613] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:10:56,303 INFO [Listener at localhost/46613] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:10:56,307 INFO [Listener at localhost/46613] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/Jetty_localhost_36543_datanode____gram54/webapp 2023-07-24 18:10:56,396 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd7803dffb10e0ae1: Processing first storage report for DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626 from datanode 18450c47-9ac8-4411-b73a-05a92e967192 2023-07-24 18:10:56,396 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd7803dffb10e0ae1: from storage DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626 node DatanodeRegistration(127.0.0.1:39177, datanodeUuid=18450c47-9ac8-4411-b73a-05a92e967192, infoPort=43859, infoSecurePort=0, ipcPort=46613, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,396 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd7803dffb10e0ae1: Processing first storage report for DS-bc897d3e-a081-48c8-b4ca-86c3404dcb93 from datanode 18450c47-9ac8-4411-b73a-05a92e967192 2023-07-24 18:10:56,396 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd7803dffb10e0ae1: from storage DS-bc897d3e-a081-48c8-b4ca-86c3404dcb93 node DatanodeRegistration(127.0.0.1:39177, datanodeUuid=18450c47-9ac8-4411-b73a-05a92e967192, infoPort=43859, infoSecurePort=0, ipcPort=46613, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,420 INFO [Listener at localhost/46613] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36543 2023-07-24 18:10:56,432 WARN [Listener at localhost/42673] common.MetricsLoggerTask(153): Metrics logging will not be async since 
the logger is not log4j 2023-07-24 18:10:56,552 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2bf80a673b582a99: Processing first storage report for DS-2d36ec2d-8176-4beb-8475-460d45cc35d4 from datanode 39cd6a77-5cd2-4047-9606-487508b6b975 2023-07-24 18:10:56,552 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2bf80a673b582a99: from storage DS-2d36ec2d-8176-4beb-8475-460d45cc35d4 node DatanodeRegistration(127.0.0.1:46183, datanodeUuid=39cd6a77-5cd2-4047-9606-487508b6b975, infoPort=42207, infoSecurePort=0, ipcPort=42673, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,552 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2bf80a673b582a99: Processing first storage report for DS-69299eca-5d6f-4b36-b3ec-314886483c83 from datanode 39cd6a77-5cd2-4047-9606-487508b6b975 2023-07-24 18:10:56,552 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2bf80a673b582a99: from storage DS-69299eca-5d6f-4b36-b3ec-314886483c83 node DatanodeRegistration(127.0.0.1:46183, datanodeUuid=39cd6a77-5cd2-4047-9606-487508b6b975, infoPort=42207, infoSecurePort=0, ipcPort=42673, storageInfo=lv=-57;cid=testClusterID;nsid=340534307;c=1690222255799), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:10:56,644 DEBUG [Listener at localhost/42673] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf 2023-07-24 18:10:56,647 INFO [Listener at localhost/42673] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/zookeeper_0, clientPort=57771, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 18:10:56,649 INFO [Listener at localhost/42673] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=57771 2023-07-24 18:10:56,649 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,650 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,669 INFO [Listener at localhost/42673] util.FSUtils(471): Created version file at hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526 with version=8 2023-07-24 18:10:56,670 INFO [Listener at localhost/42673] 
hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/hbase-staging 2023-07-24 18:10:56,671 DEBUG [Listener at localhost/42673] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 18:10:56,671 DEBUG [Listener at localhost/42673] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 18:10:56,671 DEBUG [Listener at localhost/42673] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-24 18:10:56,671 DEBUG [Listener at localhost/42673] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,672 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:56,673 INFO [Listener at localhost/42673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:56,673 INFO [Listener at localhost/42673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40097 2023-07-24 18:10:56,674 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,675 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,676 INFO [Listener at localhost/42673] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40097 connecting to ZooKeeper ensemble=127.0.0.1:57771 2023-07-24 18:10:56,682 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:400970x0, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:56,683 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40097-0x101988765090000 connected 2023-07-24 18:10:56,705 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, 
/hbase/master 2023-07-24 18:10:56,706 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:56,707 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:56,707 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40097 2023-07-24 18:10:56,708 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40097 2023-07-24 18:10:56,708 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40097 2023-07-24 18:10:56,708 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40097 2023-07-24 18:10:56,708 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40097 2023-07-24 18:10:56,711 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:56,711 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:56,711 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:56,712 INFO [Listener at localhost/42673] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 18:10:56,712 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:56,712 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:56,712 INFO [Listener at localhost/42673] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:10:56,713 INFO [Listener at localhost/42673] http.HttpServer(1146): Jetty bound to port 38315 2023-07-24 18:10:56,713 INFO [Listener at localhost/42673] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:56,723 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:56,724 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:56,725 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:56,725 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4f11c631{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:56,845 INFO [Listener at localhost/42673] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:56,846 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:56,846 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:56,847 INFO [Listener at localhost/42673] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:56,848 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:56,849 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7ebbee1c{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/jetty-0_0_0_0-38315-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1376309700831959275/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:10:56,851 INFO [Listener at localhost/42673] server.AbstractConnector(333): Started ServerConnector@198b5911{HTTP/1.1, (http/1.1)}{0.0.0.0:38315} 2023-07-24 18:10:56,851 INFO [Listener at localhost/42673] server.Server(415): Started @36909ms 2023-07-24 18:10:56,851 INFO [Listener at localhost/42673] master.HMaster(444): hbase.rootdir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526, hbase.cluster.distributed=false 2023-07-24 18:10:56,867 INFO [Listener at localhost/42673] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:56,868 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,868 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,868 
INFO [Listener at localhost/42673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:56,868 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:56,868 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:56,868 INFO [Listener at localhost/42673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:56,871 INFO [Listener at localhost/42673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37455 2023-07-24 18:10:56,872 INFO [Listener at localhost/42673] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:56,873 DEBUG [Listener at localhost/42673] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:56,874 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,875 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:56,876 INFO [Listener at localhost/42673] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37455 connecting to ZooKeeper ensemble=127.0.0.1:57771 2023-07-24 18:10:56,886 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:374550x0, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:56,888 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37455-0x101988765090001 connected 2023-07-24 18:10:56,888 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:56,889 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:56,889 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:56,890 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37455 2023-07-24 18:10:56,890 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37455 2023-07-24 18:10:56,890 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37455 2023-07-24 18:10:56,891 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37455 2023-07-24 18:10:56,891 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37455 2023-07-24 18:10:56,893 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:56,893 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:56,893 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:56,894 INFO [Listener at localhost/42673] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:56,894 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:56,894 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:56,894 INFO [Listener at localhost/42673] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:56,895 INFO [Listener at localhost/42673] http.HttpServer(1146): Jetty bound to port 32899 2023-07-24 18:10:56,896 INFO [Listener at localhost/42673] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:56,899 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:56,900 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:56,900 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:56,900 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@16c56918{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:57,015 INFO [Listener at localhost/42673] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:57,016 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:57,016 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:57,016 INFO [Listener at localhost/42673] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:57,017 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,018 INFO 
[Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@536de3b9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/jetty-0_0_0_0-32899-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7629397554515574402/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:57,019 INFO [Listener at localhost/42673] server.AbstractConnector(333): Started ServerConnector@11eb531b{HTTP/1.1, (http/1.1)}{0.0.0.0:32899} 2023-07-24 18:10:57,020 INFO [Listener at localhost/42673] server.Server(415): Started @37078ms 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:57,032 INFO [Listener at localhost/42673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:57,033 INFO [Listener at localhost/42673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34233 2023-07-24 18:10:57,034 INFO [Listener at localhost/42673] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:57,035 DEBUG [Listener at localhost/42673] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:57,036 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:57,037 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:57,038 INFO [Listener at localhost/42673] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34233 connecting to ZooKeeper ensemble=127.0.0.1:57771 2023-07-24 18:10:57,042 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:342330x0, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
18:10:57,043 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:342330x0, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:57,044 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34233-0x101988765090002 connected 2023-07-24 18:10:57,044 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:57,045 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:57,046 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34233 2023-07-24 18:10:57,046 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34233 2023-07-24 18:10:57,046 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34233 2023-07-24 18:10:57,049 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34233 2023-07-24 18:10:57,049 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34233 2023-07-24 18:10:57,051 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:57,051 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:57,051 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:57,052 INFO [Listener at localhost/42673] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:57,052 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:57,052 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:57,052 INFO [Listener at localhost/42673] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:10:57,053 INFO [Listener at localhost/42673] http.HttpServer(1146): Jetty bound to port 39175 2023-07-24 18:10:57,053 INFO [Listener at localhost/42673] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:57,055 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,055 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:57,056 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,056 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55941c90{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:57,181 INFO [Listener at localhost/42673] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:57,182 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:57,182 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:57,182 INFO [Listener at localhost/42673] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:57,183 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,184 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6f864f73{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/jetty-0_0_0_0-39175-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4445331887459730087/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:57,185 INFO [Listener at localhost/42673] server.AbstractConnector(333): Started ServerConnector@78016663{HTTP/1.1, (http/1.1)}{0.0.0.0:39175} 2023-07-24 18:10:57,185 INFO [Listener at localhost/42673] server.Server(415): Started @37244ms 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:10:57,197 INFO 
[Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:10:57,197 INFO [Listener at localhost/42673] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:10:57,198 INFO [Listener at localhost/42673] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41283 2023-07-24 18:10:57,198 INFO [Listener at localhost/42673] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:10:57,200 DEBUG [Listener at localhost/42673] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:10:57,200 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:57,201 INFO [Listener at localhost/42673] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:57,202 INFO [Listener at localhost/42673] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41283 connecting to ZooKeeper ensemble=127.0.0.1:57771 2023-07-24 18:10:57,206 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:412830x0, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:57,207 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:412830x0, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:10:57,208 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:412830x0, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:10:57,208 DEBUG [Listener at localhost/42673] zookeeper.ZKUtil(164): regionserver:412830x0, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:10:57,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41283-0x101988765090003 connected 2023-07-24 18:10:57,213 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41283 2023-07-24 18:10:57,214 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41283 2023-07-24 18:10:57,227 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41283 2023-07-24 18:10:57,227 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41283 2023-07-24 18:10:57,228 DEBUG [Listener at localhost/42673] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41283 2023-07-24 
18:10:57,235 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:10:57,235 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:10:57,235 INFO [Listener at localhost/42673] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:10:57,236 INFO [Listener at localhost/42673] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:10:57,236 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:10:57,236 INFO [Listener at localhost/42673] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:10:57,236 INFO [Listener at localhost/42673] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:10:57,237 INFO [Listener at localhost/42673] http.HttpServer(1146): Jetty bound to port 38337 2023-07-24 18:10:57,237 INFO [Listener at localhost/42673] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:57,242 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,243 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:10:57,243 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,244 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@a8e6c2f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:10:57,370 INFO [Listener at localhost/42673] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:10:57,372 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:10:57,372 INFO [Listener at localhost/42673] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:10:57,372 INFO [Listener at localhost/42673] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:10:57,373 INFO [Listener at localhost/42673] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:10:57,374 INFO [Listener at localhost/42673] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2b7448e1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/java.io.tmpdir/jetty-0_0_0_0-38337-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2173413379749659211/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:10:57,376 INFO [Listener at localhost/42673] server.AbstractConnector(333): Started ServerConnector@73eda5e{HTTP/1.1, (http/1.1)}{0.0.0.0:38337} 2023-07-24 18:10:57,376 INFO [Listener at localhost/42673] server.Server(415): Started @37435ms 2023-07-24 18:10:57,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:10:57,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@255b4a1d{HTTP/1.1, (http/1.1)}{0.0.0.0:35743} 2023-07-24 18:10:57,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37443ms 2023-07-24 18:10:57,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,386 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:10:57,386 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,388 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:57,388 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:57,388 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:57,388 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:57,391 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40097,1690222256671 from backup master directory 2023-07-24 18:10:57,391 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40097-0x101988765090000, 
quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:10:57,392 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:10:57,393 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,394 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:57,394 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:10:57,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,418 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/hbase.id with ID: a7ca4a2b-ed0c-4bd6-9a3a-714b3636cece 2023-07-24 18:10:57,434 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:10:57,438 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,481 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5022eed8 to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:57,490 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c79c332, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:57,490 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:57,491 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 18:10:57,494 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:57,496 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store-tmp 2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:10:57,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:10:57,523 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
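The {NAME => 'proc', ...} attributes printed above for the master:store region can be reproduced with the HBase 2.x descriptor builder API. This is a minimal sketch, separate from the test run logged here; the table name, family name and attribute values are taken from the log, while the class name and the exact builder calls are illustrative and assume the stock org.apache.hadoop.hbase.client builders.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static void main(String[] args) {
    // Mirror the attributes logged for the 'proc' family of master:store.
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .setMaxVersions(1)                 // VERSIONS => '1'
        .setInMemory(false)                // IN_MEMORY => 'false'
        .setBlocksize(65536)               // BLOCKSIZE => '65536'
        .setBlockCacheEnabled(true)        // BLOCKCACHE => 'true'
        .build();

    TableDescriptor store = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))
        .setColumnFamily(proc)
        .build();

    // Printing the descriptor yields roughly the {NAME => 'proc', ...} form seen in the log.
    System.out.println(store);
  }
}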
2023-07-24 18:10:57,523 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:57,524 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/WALs/jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,527 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40097%2C1690222256671, suffix=, logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/WALs/jenkins-hbase4.apache.org,40097,1690222256671, archiveDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/oldWALs, maxLogs=10 2023-07-24 18:10:57,547 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK] 2023-07-24 18:10:57,551 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK] 2023-07-24 18:10:57,551 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK] 2023-07-24 18:10:57,554 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/WALs/jenkins-hbase4.apache.org,40097,1690222256671/jenkins-hbase4.apache.org%2C40097%2C1690222256671.1690222257527 2023-07-24 18:10:57,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK]] 2023-07-24 18:10:57,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:57,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:57,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,555 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,560 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,562 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:10:57,563 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:10:57,563 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:57,564 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,565 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:10:57,578 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:57,579 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9611104160, jitterRate=-0.1048961728811264}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:57,579 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:10:57,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:10:57,581 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:10:57,582 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:10:57,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 18:10:57,582 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 18:10:57,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 18:10:57,583 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:10:57,584 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 18:10:57,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 18:10:57,587 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 18:10:57,587 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:10:57,588 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:10:57,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:10:57,592 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:10:57,594 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,594 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:10:57,596 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:57,596 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:57,596 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 18:10:57,596 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:10:57,596 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,597 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40097,1690222256671, sessionid=0x101988765090000, setting cluster-up flag (Was=false) 2023-07-24 18:10:57,602 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,608 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:10:57,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,613 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:10:57,624 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:57,625 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.hbase-snapshot/.tmp 2023-07-24 18:10:57,633 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:10:57,634 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:10:57,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:10:57,643 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:57,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
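The entries above show org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint being loaded as a system master coprocessor (the test harness injects it directly). A hedged sketch of how a 2.x deployment would normally enable the same rsgroup feature through configuration; the key names are the standard HBase ones, and pairing the endpoint with RSGroupBasedLoadBalancer follows the usual hbase-rsgroup setup rather than anything stated in this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RSGroupConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the rsgroup endpoint as a master coprocessor, as seen loaded in the log above.
    conf.set("hbase.coprocessor.master.classes",
        "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
    // The endpoint is normally paired with the group-aware balancer.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
    System.out.println(conf.get("hbase.coprocessor.master.classes"));
  }
}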
2023-07-24 18:10:57,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-24 18:10:57,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:57,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:57,660 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:10:57,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:10:57,661 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
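The two StochasticLoadBalancer "Loaded config" entries above report its search parameters (maxSteps=1000000, stepsPerRegion=800, maxRunningTime=30000, runMaxSteps=false). A sketch of the configuration keys those values come from; the values are copied from the log, and the key names are the usual hbase.master.balancer.stochastic.* ones, stated here from memory rather than from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BalancerTuningSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Standard StochasticLoadBalancer knobs corresponding to the values logged above.
    conf.setLong("hbase.master.balancer.stochastic.maxSteps", 1_000_000L);
    conf.setInt("hbase.master.balancer.stochastic.stepsPerRegion", 800);
    conf.setLong("hbase.master.balancer.stochastic.maxRunningTime", 30_000L);
    conf.setBoolean("hbase.master.balancer.stochastic.runMaxSteps", false);
    System.out.println(conf.getLong("hbase.master.balancer.stochastic.maxSteps", -1));
  }
}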
2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:57,661 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690222287664 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:10:57,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:10:57,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
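Each ExecutorService entry above starts a named pool with a fixed corePoolSize/maxPoolSize (for example MASTER_OPEN_REGION with 5/5). A plain-JDK analogue of that pattern, illustrative only; HBase's own executor.ExecutorService wrapper is not reproduced here, this just shows the bounded-pool shape the log is describing.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class NamedPoolSketch {
  public static void main(String[] args) throws Exception {
    // Analogue of "corePoolSize=5, maxPoolSize=5" from the MASTER_* executors above.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5, 5, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    // Let idle core workers exit, as the procedure dispatcher above advertises
    // with "coreThreads=3 (allowCoreThreadTimeOut=true)".
    pool.allowCoreThreadTimeOut(true);
    for (int i = 0; i < 10; i++) {
      final int task = i;
      pool.execute(() ->
          System.out.println("task " + task + " on " + Thread.currentThread().getName()));
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }
}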
2023-07-24 18:10:57,665 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:57,665 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 18:10:57,665 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:10:57,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:10:57,666 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:10:57,667 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:57,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:10:57,673 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:10:57,674 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222257673,5,FailOnTimeoutGroup] 2023-07-24 18:10:57,675 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222257674,5,FailOnTimeoutGroup] 2023-07-24 18:10:57,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 18:10:57,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,675 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
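InitMetaProcedure above writes the hbase:meta table descriptor with its info, rep_barrier and table families. The same descriptor can be read back from a client once the cluster is up; a short sketch, assuming a running cluster reachable through the local hbase-site.xml (not part of this test).

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class MetaDescriptorSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableDescriptor meta = admin.getDescriptor(TableName.META_TABLE_NAME);
      // Should list the info/rep_barrier/table families that the procedure wrote above.
      System.out.println(meta);
    }
  }
}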
2023-07-24 18:10:57,678 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(951): ClusterId : a7ca4a2b-ed0c-4bd6-9a3a-714b3636cece 2023-07-24 18:10:57,681 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:57,681 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(951): ClusterId : a7ca4a2b-ed0c-4bd6-9a3a-714b3636cece 2023-07-24 18:10:57,681 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(951): ClusterId : a7ca4a2b-ed0c-4bd6-9a3a-714b3636cece 2023-07-24 18:10:57,681 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:57,681 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:10:57,684 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:57,684 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:57,684 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:57,684 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:10:57,684 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:57,684 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:10:57,687 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:57,687 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:57,687 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:57,689 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ReadOnlyZKClient(139): Connect 0x1edb45f0 to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:57,689 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:57,689 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ReadOnlyZKClient(139): Connect 0x39b73e37 to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:57,690 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 
'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526 2023-07-24 18:10:57,694 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:10:57,695 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ReadOnlyZKClient(139): Connect 0x5a1d53dc to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:57,706 DEBUG [RS:2;jenkins-hbase4:41283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@101ca85b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:57,706 DEBUG [RS:1;jenkins-hbase4:34233] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3770d771, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:57,707 DEBUG [RS:1;jenkins-hbase4:34233] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@764ddd4a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:57,707 DEBUG [RS:2;jenkins-hbase4:41283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5121a260, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:57,708 DEBUG [RS:0;jenkins-hbase4:37455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54dab6ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:57,708 DEBUG [RS:0;jenkins-hbase4:37455] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50e538ac, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:10:57,717 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41283 2023-07-24 18:10:57,717 INFO [RS:2;jenkins-hbase4:41283] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:57,717 INFO [RS:2;jenkins-hbase4:41283] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:57,717 DEBUG [RS:2;jenkins-hbase4:41283] 
regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:57,718 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40097,1690222256671 with isa=jenkins-hbase4.apache.org/172.31.14.131:41283, startcode=1690222257196 2023-07-24 18:10:57,718 DEBUG [RS:2;jenkins-hbase4:41283] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:57,718 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34233 2023-07-24 18:10:57,718 INFO [RS:1;jenkins-hbase4:34233] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:57,718 INFO [RS:1;jenkins-hbase4:34233] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:57,718 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:10:57,719 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40097,1690222256671 with isa=jenkins-hbase4.apache.org/172.31.14.131:34233, startcode=1690222257031 2023-07-24 18:10:57,719 DEBUG [RS:1;jenkins-hbase4:34233] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:57,719 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:57,720 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51065, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:57,722 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40097] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,722 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
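The reportForDuty and "Registering regionserver=..." entries above are the three region servers checking in with the master. From a client, the resulting set of live servers can be listed through the cluster metrics API; a sketch, again assuming a reachable cluster rather than this minicluster.

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      // Each key is a ServerName of the form host,port,startcode,
      // like jenkins-hbase4.apache.org,41283,1690222257196 in the log above.
      for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
        System.out.println(sn);
      }
    }
  }
}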
2023-07-24 18:10:57,722 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:57,722 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:10:57,722 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50145, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:57,723 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40097] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,723 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:10:57,723 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:10:57,723 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526 2023-07-24 18:10:57,723 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40065 2023-07-24 18:10:57,723 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38315 2023-07-24 18:10:57,724 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/info 2023-07-24 18:10:57,724 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:57,725 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:57,725 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ZKUtil(162): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,725 WARN [RS:1;jenkins-hbase4:34233] hbase.ZNodeClearer(69): Environment 
variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:57,725 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:57,725 INFO [RS:1;jenkins-hbase4:34233] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:57,725 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:57,725 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,726 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526 2023-07-24 18:10:57,726 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40065 2023-07-24 18:10:57,727 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38315 2023-07-24 18:10:57,727 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:37455 2023-07-24 18:10:57,727 INFO [RS:0;jenkins-hbase4:37455] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:10:57,727 INFO [RS:0;jenkins-hbase4:37455] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:10:57,728 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1022): About to register with Master. 
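Each region server above instantiates a WALProvider of type AsyncFSWALProvider. Which provider gets instantiated is controlled by the hbase.wal.provider key; a small sketch of that setting, with the note that "asyncfs" selecting AsyncFSWALProvider (the 2.x default) and "filesystem" selecting the classic FSHLog-based provider is standard behaviour, not something this log states.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWALProvider, as instantiated in the log above;
    // "filesystem" would select the FSHLog-based provider instead.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println(conf.get("hbase.wal.provider"));
  }
}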
2023-07-24 18:10:57,728 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:57,729 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34233,1690222257031] 2023-07-24 18:10:57,729 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:57,729 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40097,1690222256671 with isa=jenkins-hbase4.apache.org/172.31.14.131:37455, startcode=1690222256867 2023-07-24 18:10:57,730 DEBUG [RS:0;jenkins-hbase4:37455] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:10:57,730 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:57,730 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:57,731 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37005, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:10:57,731 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40097] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,731 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
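The CompactionConfiguration entries above report each store's effective thresholds: minCompactSize 128 MB, between 3 and 10 files per compaction, ratio 1.2, off-peak ratio 5.0. A sketch of the standard hbase.hstore.compaction.* keys behind those numbers; the numeric values are copied from the log, the key names are the usual ones.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact: 3
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact: 10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
    System.out.println(conf.getInt("hbase.hstore.compaction.min", -1));
  }
}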
2023-07-24 18:10:57,731 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:10:57,731 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526 2023-07-24 18:10:57,731 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40065 2023-07-24 18:10:57,731 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38315 2023-07-24 18:10:57,734 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/table 2023-07-24 18:10:57,734 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:57,734 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:10:57,735 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:57,735 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ZKUtil(162): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,735 WARN [RS:2;jenkins-hbase4:41283] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
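The ZKUtil and ZKWatcher entries above show each server keeping watches on the children of /hbase/rs, so that membership changes fan out as NodeChildrenChanged events. A minimal raw-ZooKeeper sketch of the same watch pattern against the quorum address from the log; it uses only the stock org.apache.zookeeper client and is not the ZKWatcher code itself.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch connected = new CountDownLatch(1);
    ZooKeeper zk = new ZooKeeper("127.0.0.1:57771", 90000, event -> {
      if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
        connected.countDown();
      }
      // A NodeChildrenChanged event on /hbase/rs is what the servers above react to.
      System.out.println("event: " + event.getType() + " on " + event.getPath());
    });
    connected.await();
    // Passing true re-arms the default watcher for the next membership change.
    List<String> servers = zk.getChildren("/hbase/rs", true);
    servers.forEach(System.out::println);
    zk.close();
  }
}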
2023-07-24 18:10:57,735 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41283,1690222257196] 2023-07-24 18:10:57,735 INFO [RS:2;jenkins-hbase4:41283] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:57,735 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37455,1690222256867] 2023-07-24 18:10:57,735 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,735 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ZKUtil(162): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,735 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740 2023-07-24 18:10:57,736 WARN [RS:0;jenkins-hbase4:37455] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:10:57,736 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ZKUtil(162): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,736 INFO [RS:0;jenkins-hbase4:37455] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:57,736 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,737 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740 2023-07-24 18:10:57,737 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ZKUtil(162): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,737 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 18:10:57,738 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ZKUtil(162): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,741 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:57,741 INFO [RS:1;jenkins-hbase4:34233] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:57,741 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ZKUtil(162): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,742 DEBUG [RS:2;jenkins-hbase4:41283] 
zookeeper.ZKUtil(162): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,743 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ZKUtil(162): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,743 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:10:57,743 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ZKUtil(162): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,744 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:57,745 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ZKUtil(162): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,745 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ZKUtil(162): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,746 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:57,746 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:10:57,750 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:57,751 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11591388800, jitterRate=0.0795322060585022}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:57,751 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:57,751 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:10:57,751 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:10:57,751 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:10:57,751 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:10:57,751 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:10:57,752 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:10:57,752 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:10:57,753 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:10:57,753 INFO [PEWorker-1] 
procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 18:10:57,753 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:10:57,757 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:10:57,763 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 18:10:57,817 INFO [RS:0;jenkins-hbase4:37455] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:57,817 INFO [RS:1;jenkins-hbase4:34233] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:57,817 INFO [RS:2;jenkins-hbase4:41283] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:10:57,817 INFO [RS:1;jenkins-hbase4:34233] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:57,817 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,817 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:57,819 INFO [RS:0;jenkins-hbase4:37455] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:57,819 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,819 INFO [RS:0;jenkins-hbase4:37455] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:57,819 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,819 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,819 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,819 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,819 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,819 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:57,819 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,820 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:57,820 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,820 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,820 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,820 DEBUG [RS:1;jenkins-hbase4:34233] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,821 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,821 INFO [RS:2;jenkins-hbase4:41283] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:10:57,822 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,822 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,822 INFO [RS:2;jenkins-hbase4:41283] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:10:57,822 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,822 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,822 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 DEBUG [RS:0;jenkins-hbase4:37455] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,823 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:10:57,825 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,825 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,825 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,825 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,825 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,826 DEBUG [RS:2;jenkins-hbase4:41283] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:10:57,832 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,832 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,832 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,832 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,837 INFO [RS:0;jenkins-hbase4:37455] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:57,837 INFO [RS:1;jenkins-hbase4:34233] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:57,837 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37455,1690222256867-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,837 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34233,1690222257031-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,843 INFO [RS:2;jenkins-hbase4:41283] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:10:57,843 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41283,1690222257196-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,849 INFO [RS:0;jenkins-hbase4:37455] regionserver.Replication(203): jenkins-hbase4.apache.org,37455,1690222256867 started 2023-07-24 18:10:57,849 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37455,1690222256867, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37455, sessionid=0x101988765090001 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37455,1690222256867' 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:57,850 DEBUG [RS:0;jenkins-hbase4:37455] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:57,851 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37455,1690222256867' 2023-07-24 18:10:57,851 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:57,851 DEBUG [RS:0;jenkins-hbase4:37455] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:57,851 DEBUG [RS:0;jenkins-hbase4:37455] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:57,851 INFO [RS:0;jenkins-hbase4:37455] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:10:57,854 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,854 INFO [RS:1;jenkins-hbase4:34233] regionserver.Replication(203): jenkins-hbase4.apache.org,34233,1690222257031 started 2023-07-24 18:10:57,854 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34233,1690222257031, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34233, sessionid=0x101988765090002 2023-07-24 18:10:57,854 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:57,854 DEBUG [RS:1;jenkins-hbase4:34233] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,854 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34233,1690222257031' 2023-07-24 18:10:57,854 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:57,854 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ZKUtil(398): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:10:57,854 INFO [RS:0;jenkins-hbase4:37455] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:10:57,854 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:57,855 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34233,1690222257031' 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:57,855 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,855 DEBUG [RS:1;jenkins-hbase4:34233] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:57,856 DEBUG [RS:1;jenkins-hbase4:34233] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:57,856 INFO [RS:1;jenkins-hbase4:34233] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:10:57,856 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,856 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ZKUtil(398): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:10:57,856 INFO [RS:1;jenkins-hbase4:34233] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:10:57,856 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,857 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,858 INFO [RS:2;jenkins-hbase4:41283] regionserver.Replication(203): jenkins-hbase4.apache.org,41283,1690222257196 started 2023-07-24 18:10:57,858 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41283,1690222257196, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41283, sessionid=0x101988765090003 2023-07-24 18:10:57,858 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:10:57,858 DEBUG [RS:2;jenkins-hbase4:41283] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,858 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41283,1690222257196' 2023-07-24 18:10:57,858 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:10:57,858 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41283,1690222257196' 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:10:57,859 DEBUG [RS:2;jenkins-hbase4:41283] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:10:57,859 INFO [RS:2;jenkins-hbase4:41283] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-24 18:10:57,859 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-24 18:10:57,860 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ZKUtil(398): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-24 18:10:57,860 INFO [RS:2;jenkins-hbase4:41283] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-24 18:10:57,860 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,860 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:57,913 DEBUG [jenkins-hbase4:40097] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:10:57,914 DEBUG [jenkins-hbase4:40097] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:57,914 DEBUG [jenkins-hbase4:40097] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:57,914 DEBUG [jenkins-hbase4:40097] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:57,914 DEBUG [jenkins-hbase4:40097] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:57,914 DEBUG [jenkins-hbase4:40097] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:57,915 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34233,1690222257031, state=OPENING 2023-07-24 18:10:57,917 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 18:10:57,918 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:57,921 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:57,921 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34233,1690222257031}] 2023-07-24 18:10:57,958 WARN [ReadOnlyZKClient-127.0.0.1:57771@0x5022eed8] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:10:57,958 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:57,959 INFO [RS:0;jenkins-hbase4:37455] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37455%2C1690222256867, suffix=, logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,37455,1690222256867, archiveDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs, maxLogs=32 2023-07-24 18:10:57,959 INFO [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C34233%2C1690222257031, suffix=, logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,34233,1690222257031, archiveDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs, maxLogs=32 2023-07-24 18:10:57,960 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:57,961 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34233] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:47622 deadline: 1690222317960, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:57,961 INFO [RS:2;jenkins-hbase4:41283] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41283%2C1690222257196, suffix=, logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,41283,1690222257196, archiveDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs, maxLogs=32 2023-07-24 18:10:57,979 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK] 2023-07-24 18:10:57,979 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK] 2023-07-24 18:10:57,979 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK] 2023-07-24 18:10:57,984 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK] 2023-07-24 18:10:57,984 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK] 2023-07-24 18:10:57,984 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK] 2023-07-24 18:10:57,988 INFO [RS:0;jenkins-hbase4:37455] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,37455,1690222256867/jenkins-hbase4.apache.org%2C37455%2C1690222256867.1690222257961 2023-07-24 18:10:57,988 INFO [RS:2;jenkins-hbase4:41283] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,41283,1690222257196/jenkins-hbase4.apache.org%2C41283%2C1690222257196.1690222257962 2023-07-24 18:10:57,988 DEBUG [RS:0;jenkins-hbase4:37455] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK]] 2023-07-24 18:10:57,989 DEBUG [RS:2;jenkins-hbase4:41283] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK]] 2023-07-24 18:10:57,991 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK] 2023-07-24 18:10:57,991 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK] 2023-07-24 18:10:57,993 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK] 2023-07-24 18:10:57,996 INFO [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,34233,1690222257031/jenkins-hbase4.apache.org%2C34233%2C1690222257031.1690222257961 2023-07-24 18:10:57,996 DEBUG [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK], DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK]] 2023-07-24 18:10:58,075 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:58,077 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:58,079 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47628, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:58,086 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:10:58,086 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:10:58,088 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34233%2C1690222257031.meta, 
suffix=.meta, logDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,34233,1690222257031, archiveDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs, maxLogs=32 2023-07-24 18:10:58,107 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK] 2023-07-24 18:10:58,109 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK] 2023-07-24 18:10:58,114 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK] 2023-07-24 18:10:58,120 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/WALs/jenkins-hbase4.apache.org,34233,1690222257031/jenkins-hbase4.apache.org%2C34233%2C1690222257031.meta.1690222258089.meta 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41817,DS-471732e8-5199-4594-b5ec-f85b7e7953fa,DISK], DatanodeInfoWithStorage[127.0.0.1:46183,DS-2d36ec2d-8176-4beb-8475-460d45cc35d4,DISK], DatanodeInfoWithStorage[127.0.0.1:39177,DS-cebf1dfb-c036-4dcf-9799-93f99d9a8626,DISK]] 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:10:58,121 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:10:58,121 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:10:58,123 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:10:58,124 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/info 2023-07-24 18:10:58,125 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/info 2023-07-24 18:10:58,125 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:10:58,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:58,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:10:58,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:58,128 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:10:58,129 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:10:58,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:58,129 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:10:58,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/table 2023-07-24 18:10:58,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/table 2023-07-24 18:10:58,131 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:10:58,132 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:58,133 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740 2023-07-24 18:10:58,134 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740 2023-07-24 18:10:58,136 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 18:10:58,140 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:10:58,141 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11475573920, jitterRate=0.06874610483646393}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:10:58,141 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:10:58,142 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690222258075 2023-07-24 18:10:58,146 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:10:58,147 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:10:58,147 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34233,1690222257031, state=OPEN 2023-07-24 18:10:58,149 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:10:58,149 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:10:58,150 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 18:10:58,150 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34233,1690222257031 in 230 msec 2023-07-24 18:10:58,152 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 18:10:58,152 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 397 msec 2023-07-24 18:10:58,155 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 509 msec 2023-07-24 18:10:58,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222258155, completionTime=-1 2023-07-24 18:10:58,155 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 18:10:58,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 18:10:58,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:10:58,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222318159 2023-07-24 18:10:58,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222378159 2023-07-24 18:10:58,159 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40097,1690222256671-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40097,1690222256671-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40097,1690222256671-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40097, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 18:10:58,167 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:58,168 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 18:10:58,169 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 18:10:58,170 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:58,171 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:58,173 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,173 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07 empty. 2023-07-24 18:10:58,174 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,174 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 18:10:58,191 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:58,195 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 46dfa8a06c7fdd79272df9289c6bca07, NAME => 'hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp 2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 46dfa8a06c7fdd79272df9289c6bca07, disabling compactions & flushes 2023-07-24 18:10:58,223 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 
2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. after waiting 0 ms 2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:10:58,223 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:10:58,223 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 46dfa8a06c7fdd79272df9289c6bca07: 2023-07-24 18:10:58,226 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:58,227 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222258227"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222258227"}]},"ts":"1690222258227"} 2023-07-24 18:10:58,230 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:58,231 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:58,231 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258231"}]},"ts":"1690222258231"} 2023-07-24 18:10:58,232 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 18:10:58,235 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:58,236 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:58,236 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:58,236 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:58,236 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:58,236 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=46dfa8a06c7fdd79272df9289c6bca07, ASSIGN}] 2023-07-24 18:10:58,238 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=46dfa8a06c7fdd79272df9289c6bca07, ASSIGN 2023-07-24 18:10:58,239 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=46dfa8a06c7fdd79272df9289c6bca07, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41283,1690222257196; forceNewPlan=false, retain=false 2023-07-24 18:10:58,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:58,267 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:10:58,269 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:58,270 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:58,272 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,272 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a empty. 
2023-07-24 18:10:58,273 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,273 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 18:10:58,299 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:58,301 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => f70dd312b1c13ea7f7e08991ff819e6a, NAME => 'hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp 2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing f70dd312b1c13ea7f7e08991ff819e6a, disabling compactions & flushes 2023-07-24 18:10:58,328 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. after waiting 0 ms 2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:10:58,328 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 
2023-07-24 18:10:58,328 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for f70dd312b1c13ea7f7e08991ff819e6a: 2023-07-24 18:10:58,330 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:58,331 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222258331"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222258331"}]},"ts":"1690222258331"} 2023-07-24 18:10:58,333 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:58,334 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:58,334 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258334"}]},"ts":"1690222258334"} 2023-07-24 18:10:58,336 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 18:10:58,340 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:58,340 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:58,340 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:58,340 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:58,340 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:58,341 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f70dd312b1c13ea7f7e08991ff819e6a, ASSIGN}] 2023-07-24 18:10:58,343 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=f70dd312b1c13ea7f7e08991ff819e6a, ASSIGN 2023-07-24 18:10:58,344 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=f70dd312b1c13ea7f7e08991ff819e6a, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34233,1690222257031; forceNewPlan=false, retain=false 2023-07-24 18:10:58,344 INFO [jenkins-hbase4:40097] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 18:10:58,347 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=46dfa8a06c7fdd79272df9289c6bca07, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:58,347 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222258347"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222258347"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222258347"}]},"ts":"1690222258347"} 2023-07-24 18:10:58,348 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f70dd312b1c13ea7f7e08991ff819e6a, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:58,348 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222258348"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222258348"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222258348"}]},"ts":"1690222258348"} 2023-07-24 18:10:58,350 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 46dfa8a06c7fdd79272df9289c6bca07, server=jenkins-hbase4.apache.org,41283,1690222257196}] 2023-07-24 18:10:58,351 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure f70dd312b1c13ea7f7e08991ff819e6a, server=jenkins-hbase4.apache.org,34233,1690222257031}] 2023-07-24 18:10:58,504 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:58,504 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:58,506 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43940, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:58,511 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:10:58,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f70dd312b1c13ea7f7e08991ff819e6a, NAME => 'hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. service=MultiRowMutationService 2023-07-24 18:10:58,512 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,515 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:10:58,515 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 46dfa8a06c7fdd79272df9289c6bca07, NAME => 'hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:58,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,517 INFO [StoreOpener-f70dd312b1c13ea7f7e08991ff819e6a-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,517 INFO [StoreOpener-46dfa8a06c7fdd79272df9289c6bca07-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,519 DEBUG [StoreOpener-f70dd312b1c13ea7f7e08991ff819e6a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/m 2023-07-24 18:10:58,519 DEBUG [StoreOpener-f70dd312b1c13ea7f7e08991ff819e6a-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/m 2023-07-24 18:10:58,519 DEBUG 
[StoreOpener-46dfa8a06c7fdd79272df9289c6bca07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/info 2023-07-24 18:10:58,519 DEBUG [StoreOpener-46dfa8a06c7fdd79272df9289c6bca07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/info 2023-07-24 18:10:58,519 INFO [StoreOpener-f70dd312b1c13ea7f7e08991ff819e6a-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f70dd312b1c13ea7f7e08991ff819e6a columnFamilyName m 2023-07-24 18:10:58,520 INFO [StoreOpener-46dfa8a06c7fdd79272df9289c6bca07-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 46dfa8a06c7fdd79272df9289c6bca07 columnFamilyName info 2023-07-24 18:10:58,520 INFO [StoreOpener-f70dd312b1c13ea7f7e08991ff819e6a-1] regionserver.HStore(310): Store=f70dd312b1c13ea7f7e08991ff819e6a/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:58,520 INFO [StoreOpener-46dfa8a06c7fdd79272df9289c6bca07-1] regionserver.HStore(310): Store=46dfa8a06c7fdd79272df9289c6bca07/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:58,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a 
2023-07-24 18:10:58,522 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,526 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:10:58,527 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:10:58,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:58,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:58,537 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f70dd312b1c13ea7f7e08991ff819e6a; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@300f4653, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:58,537 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f70dd312b1c13ea7f7e08991ff819e6a: 2023-07-24 18:10:58,538 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a., pid=9, masterSystemTime=1690222258504 2023-07-24 18:10:58,541 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:10:58,541 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 
2023-07-24 18:10:58,542 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=f70dd312b1c13ea7f7e08991ff819e6a, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:10:58,542 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222258541"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222258541"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222258541"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222258541"}]},"ts":"1690222258541"} 2023-07-24 18:10:58,542 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 46dfa8a06c7fdd79272df9289c6bca07; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11669134720, jitterRate=0.0867728590965271}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:58,543 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 46dfa8a06c7fdd79272df9289c6bca07: 2023-07-24 18:10:58,543 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07., pid=8, masterSystemTime=1690222258504 2023-07-24 18:10:58,547 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:10:58,548 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 
2023-07-24 18:10:58,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 18:10:58,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure f70dd312b1c13ea7f7e08991ff819e6a, server=jenkins-hbase4.apache.org,34233,1690222257031 in 192 msec 2023-07-24 18:10:58,548 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=46dfa8a06c7fdd79272df9289c6bca07, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:58,548 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222258548"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222258548"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222258548"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222258548"}]},"ts":"1690222258548"} 2023-07-24 18:10:58,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 18:10:58,551 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=f70dd312b1c13ea7f7e08991ff819e6a, ASSIGN in 207 msec 2023-07-24 18:10:58,552 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:58,553 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258552"}]},"ts":"1690222258552"} 2023-07-24 18:10:58,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 18:10:58,554 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 46dfa8a06c7fdd79272df9289c6bca07, server=jenkins-hbase4.apache.org,41283,1690222257196 in 200 msec 2023-07-24 18:10:58,554 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 18:10:58,557 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 18:10:58,557 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=46dfa8a06c7fdd79272df9289c6bca07, ASSIGN in 318 msec 2023-07-24 18:10:58,557 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:58,558 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:58,559 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258558"}]},"ts":"1690222258558"} 2023-07-24 18:10:58,560 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 18:10:58,560 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 293 msec 2023-07-24 18:10:58,562 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:58,563 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 395 msec 2023-07-24 18:10:58,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 18:10:58,571 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:58,571 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:58,573 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:10:58,573 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-24 18:10:58,575 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:58,576 INFO [RS-EventLoopGroup-11-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43954, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:58,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 18:10:58,582 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:58,582 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,584 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:58,586 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40097,1690222256671] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:10:58,591 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:58,594 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-24 18:10:58,602 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:10:58,609 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:58,613 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-24 18:10:58,626 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:10:58,629 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:10:58,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.235sec 2023-07-24 18:10:58,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
2023-07-24 18:10:58,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:58,631 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-24 18:10:58,631 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-24 18:10:58,633 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:58,633 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:58,634 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-24 18:10:58,635 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:58,635 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef empty. 2023-07-24 18:10:58,636 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:58,636 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-24 18:10:58,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-24 18:10:58,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40097,1690222256671-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:10:58,641 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40097,1690222256671-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 18:10:58,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:10:58,683 DEBUG [Listener at localhost/42673] zookeeper.ReadOnlyZKClient(139): Connect 0x24130d34 to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:58,693 DEBUG [Listener at localhost/42673] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53934634, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:58,694 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:58,695 DEBUG [hconnection-0x408dab3e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:58,696 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => d8df9fe238514618c9b3f963db5679ef, NAME => 'hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp 2023-07-24 18:10:58,698 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47634, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:58,699 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:10:58,699 INFO [Listener at localhost/42673] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:10:58,701 DEBUG [Listener at localhost/42673] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:10:58,703 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35734, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:10:58,706 DEBUG [Listener at localhost/42673-EventThread] 
zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:10:58,706 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:10:58,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:10:58,707 DEBUG [Listener at localhost/42673] zookeeper.ReadOnlyZKClient(139): Connect 0x3c9a544d to 127.0.0.1:57771 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,717 DEBUG [Listener at localhost/42673] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11e3dcbb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing d8df9fe238514618c9b3f963db5679ef, disabling compactions & flushes 2023-07-24 18:10:58,717 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. after waiting 0 ms 2023-07-24 18:10:58,717 INFO [Listener at localhost/42673] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:57771 2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:10:58,717 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 
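The "set balanceSwitch=false" request logged just above is the test client turning the master's load balancer off before it starts exercising RSGroup assignments. A minimal client-side sketch of the same call, assuming the standard HBase 2.x Admin API (the connection setup here is illustrative and is not the test's actual code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Turn the balancer off without waiting for in-flight region moves,
      // mirroring the "set balanceSwitch=false" RPC recorded in the log above.
      boolean previous = admin.balancerSwitch(false, false);
      System.out.println("Balancer was previously " + (previous ? "on" : "off"));
    }
  }
}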
2023-07-24 18:10:58,717 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for d8df9fe238514618c9b3f963db5679ef: 2023-07-24 18:10:58,720 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:10:58,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10198876509000a connected 2023-07-24 18:10:58,721 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:58,722 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222258722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222258722"}]},"ts":"1690222258722"} 2023-07-24 18:10:58,723 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:10:58,724 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:58,724 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258724"}]},"ts":"1690222258724"} 2023-07-24 18:10:58,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-24 18:10:58,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-24 18:10:58,727 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-24 18:10:58,732 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:58,732 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:58,732 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:58,732 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:58,732 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:58,732 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=d8df9fe238514618c9b3f963db5679ef, ASSIGN}] 2023-07-24 18:10:58,733 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=d8df9fe238514618c9b3f963db5679ef, ASSIGN 2023-07-24 18:10:58,734 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=d8df9fe238514618c9b3f963db5679ef, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37455,1690222256867; forceNewPlan=false, retain=false 2023-07-24 18:10:58,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-24 18:10:58,737 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:58,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 14 msec 2023-07-24 18:10:58,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-24 18:10:58,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:58,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-24 18:10:58,844 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:58,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-24 18:10:58,845 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 18:10:58,846 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:58,846 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:58,848 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:10:58,850 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:58,850 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 empty. 
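The CreateNamespaceProcedure for 'np1' (with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2) and the subsequent 'np1:table1' create request logged above are both client-driven RPCs. A minimal sketch of the equivalent calls, assuming the standard HBase 2.x Admin API; the quota keys and the 'fam1' family come from the log, but the surrounding setup is illustrative, not the test's actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateQuotaLimitedNamespace {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace 'np1' capped at 5 regions and 2 tables, matching the
      // CreateNamespaceProcedure request in the log above.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // Table 'np1:table1' with a single column family 'fam1'; all other
      // attributes are left at their defaults, as in the descriptor printed
      // by the master above.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}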
2023-07-24 18:10:58,851 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:58,851 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 18:10:58,866 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-24 18:10:58,867 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 93145bb8e4cda600bece9c19c9b844c6, NAME => 'np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 93145bb8e4cda600bece9c19c9b844c6, disabling compactions & flushes 2023-07-24 18:10:58,876 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. after waiting 0 ms 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:58,876 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:58,876 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 93145bb8e4cda600bece9c19c9b844c6: 2023-07-24 18:10:58,878 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:10:58,879 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222258879"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222258879"}]},"ts":"1690222258879"} 2023-07-24 18:10:58,880 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:10:58,881 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:10:58,881 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222258881"}]},"ts":"1690222258881"} 2023-07-24 18:10:58,882 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-24 18:10:58,884 INFO [jenkins-hbase4:40097] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-24 18:10:58,885 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=d8df9fe238514618c9b3f963db5679ef, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:58,885 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222258885"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222258885"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222258885"}]},"ts":"1690222258885"} 2023-07-24 18:10:58,887 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=14, state=RUNNABLE; OpenRegionProcedure d8df9fe238514618c9b3f963db5679ef, server=jenkins-hbase4.apache.org,37455,1690222256867}] 2023-07-24 18:10:58,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:10:58,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:10:58,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:10:58,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:10:58,887 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:10:58,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, ASSIGN}] 2023-07-24 18:10:58,888 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, ASSIGN 2023-07-24 18:10:58,889 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41283,1690222257196; forceNewPlan=false, retain=false 2023-07-24 18:10:58,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 18:10:59,039 INFO [jenkins-hbase4:40097] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:10:59,039 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:59,040 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=93145bb8e4cda600bece9c19c9b844c6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:59,040 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:10:59,040 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222259040"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222259040"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222259040"}]},"ts":"1690222259040"} 2023-07-24 18:10:59,042 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 93145bb8e4cda600bece9c19c9b844c6, server=jenkins-hbase4.apache.org,41283,1690222257196}] 2023-07-24 18:10:59,042 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34066, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:10:59,046 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:10:59,046 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d8df9fe238514618c9b3f963db5679ef, NAME => 'hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:59,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:59,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,048 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,049 DEBUG [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/q 2023-07-24 18:10:59,050 DEBUG [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/q 2023-07-24 18:10:59,050 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8df9fe238514618c9b3f963db5679ef columnFamilyName q 2023-07-24 18:10:59,051 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] regionserver.HStore(310): Store=d8df9fe238514618c9b3f963db5679ef/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:59,051 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,052 DEBUG [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/u 2023-07-24 18:10:59,052 DEBUG [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/u 2023-07-24 18:10:59,052 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d8df9fe238514618c9b3f963db5679ef columnFamilyName u 2023-07-24 18:10:59,053 INFO [StoreOpener-d8df9fe238514618c9b3f963db5679ef-1] regionserver.HStore(310): Store=d8df9fe238514618c9b3f963db5679ef/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:59,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,054 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,056 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-24 18:10:59,057 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:10:59,059 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:59,060 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d8df9fe238514618c9b3f963db5679ef; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11999610880, jitterRate=0.11755084991455078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-24 18:10:59,060 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d8df9fe238514618c9b3f963db5679ef: 2023-07-24 18:10:59,060 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef., pid=16, masterSystemTime=1690222259039 2023-07-24 18:10:59,063 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:10:59,064 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 
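[editor's note] The store-open entries above echo the effective column-family configuration for the hbase:quota families (cache config, compaction tuning, storagePolicy=HOT, encoding=NONE, compression=NONE). As a hedged illustration only, the sketch below shows how a comparable family could be declared through the HBase 2.x client API; the table name and any value not visible in the log are assumptions, not taken from this test.

```java
// Hypothetical sketch: a column-family definition roughly matching the store
// configuration logged above. Values not shown in the log are illustrative.
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyDescriptorSketch {
  public static TableDescriptor quotaLikeTable() {
    ColumnFamilyDescriptor q = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
        .setMaxVersions(1)                              // single version, as system tables use
        .setBloomFilterType(BloomType.ROW)              // row bloom filter
        .setCompressionType(Compression.Algorithm.NONE) // compression=NONE in the store log
        .setDataBlockEncoding(DataBlockEncoding.NONE)   // encoding=NONE in the store log
        .setBlocksize(65536)                            // default block size
        .setStoragePolicy("HOT")                        // storagePolicy=HOT as set on the store dir
        .build();
    // "example:quotaLike" is an illustrative name, not a table from this run.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example:quotaLike"))
        .setColumnFamily(q)
        .build();
  }
}
```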
2023-07-24 18:10:59,064 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=d8df9fe238514618c9b3f963db5679ef, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:10:59,064 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690222259064"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222259064"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222259064"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222259064"}]},"ts":"1690222259064"} 2023-07-24 18:10:59,067 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=14 2023-07-24 18:10:59,067 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=14, state=SUCCESS; OpenRegionProcedure d8df9fe238514618c9b3f963db5679ef, server=jenkins-hbase4.apache.org,37455,1690222256867 in 178 msec 2023-07-24 18:10:59,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-24 18:10:59,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=d8df9fe238514618c9b3f963db5679ef, ASSIGN in 335 msec 2023-07-24 18:10:59,069 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:59,069 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222259069"}]},"ts":"1690222259069"} 2023-07-24 18:10:59,070 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-24 18:10:59,072 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:59,073 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 442 msec 2023-07-24 18:10:59,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 18:10:59,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 
2023-07-24 18:10:59,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 93145bb8e4cda600bece9c19c9b844c6, NAME => 'np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:10:59,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:10:59,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,200 INFO [StoreOpener-93145bb8e4cda600bece9c19c9b844c6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,201 DEBUG [StoreOpener-93145bb8e4cda600bece9c19c9b844c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/fam1 2023-07-24 18:10:59,201 DEBUG [StoreOpener-93145bb8e4cda600bece9c19c9b844c6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/fam1 2023-07-24 18:10:59,201 INFO [StoreOpener-93145bb8e4cda600bece9c19c9b844c6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 93145bb8e4cda600bece9c19c9b844c6 columnFamilyName fam1 2023-07-24 18:10:59,202 INFO [StoreOpener-93145bb8e4cda600bece9c19c9b844c6-1] regionserver.HStore(310): Store=93145bb8e4cda600bece9c19c9b844c6/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:10:59,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,207 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:10:59,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 93145bb8e4cda600bece9c19c9b844c6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10707457440, jitterRate=-0.002790316939353943}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:10:59,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 93145bb8e4cda600bece9c19c9b844c6: 2023-07-24 18:10:59,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6., pid=18, masterSystemTime=1690222259194 2023-07-24 18:10:59,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:59,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:59,210 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=93145bb8e4cda600bece9c19c9b844c6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:59,210 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222259210"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222259210"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222259210"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222259210"}]},"ts":"1690222259210"} 2023-07-24 18:10:59,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 18:10:59,213 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 93145bb8e4cda600bece9c19c9b844c6, server=jenkins-hbase4.apache.org,41283,1690222257196 in 169 msec 2023-07-24 18:10:59,214 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-24 18:10:59,214 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, ASSIGN in 326 msec 2023-07-24 18:10:59,214 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:10:59,215 DEBUG [PEWorker-4] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222259214"}]},"ts":"1690222259214"} 2023-07-24 18:10:59,216 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-24 18:10:59,218 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:10:59,219 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 377 msec 2023-07-24 18:10:59,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-24 18:10:59,448 INFO [Listener at localhost/42673] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-24 18:10:59,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:10:59,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-24 18:10:59,452 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:10:59,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-24 18:10:59,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 18:10:59,470 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:10:59,472 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:10:59,474 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-24 18:10:59,556 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 18:10:59,558 INFO [Listener at localhost/42673] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-24 18:10:59,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:10:59,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:10:59,560 INFO [Listener at localhost/42673] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-24 18:10:59,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-24 18:10:59,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-24 18:10:59,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:59,565 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222259564"}]},"ts":"1690222259564"} 2023-07-24 18:10:59,566 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-24 18:10:59,568 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-24 18:10:59,569 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, UNASSIGN}] 2023-07-24 18:10:59,569 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, UNASSIGN 2023-07-24 18:10:59,570 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=93145bb8e4cda600bece9c19c9b844c6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:10:59,570 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222259570"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222259570"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222259570"}]},"ts":"1690222259570"} 2023-07-24 18:10:59,571 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 93145bb8e4cda600bece9c19c9b844c6, server=jenkins-hbase4.apache.org,41283,1690222257196}] 2023-07-24 18:10:59,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:59,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 93145bb8e4cda600bece9c19c9b844c6, disabling compactions & flushes 2023-07-24 18:10:59,725 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:59,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:59,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. after waiting 0 ms 2023-07-24 18:10:59,726 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 2023-07-24 18:10:59,729 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:10:59,730 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6. 
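[editor's note] The rolled-back pid=19 above shows the namespace region quota at work: np1 permits 5 regions, np1:table1 already holds 1, so a pre-split np1:table2 asking for 5 more is rejected with QuotaExceededException, and the client future polls "Checking to see if procedure is done" until the rollback is reported. The sketch below is a minimal, hedged reproduction of that behaviour against a running cluster; the quota key and all names are assumptions based on standard HBase namespace quotas, not copied from this test.

```java
// Hypothetical sketch: namespace limited to 5 regions, second table exceeds it.
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceRegionQuotaSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Assumed quota key: at most 5 regions across all tables in the namespace.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());

      // First table: one region, fits within the quota.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());

      // Second table pre-split into 5 regions: 1 + 5 = 6 > 5, so the master rolls the
      // CreateTableProcedure back, as in the log above.
      byte[][] splitKeys = {
          Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"), Bytes.toBytes("4")};
      try {
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splitKeys);
      } catch (IOException expected) {
        // Expected: QuotaExceededException propagated from the master.
        System.out.println("create rejected: " + expected.getMessage());
      }
    }
  }
}
```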
2023-07-24 18:10:59,730 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 93145bb8e4cda600bece9c19c9b844c6: 2023-07-24 18:10:59,731 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,732 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=93145bb8e4cda600bece9c19c9b844c6, regionState=CLOSED 2023-07-24 18:10:59,732 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222259732"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222259732"}]},"ts":"1690222259732"} 2023-07-24 18:10:59,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-24 18:10:59,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 93145bb8e4cda600bece9c19c9b844c6, server=jenkins-hbase4.apache.org,41283,1690222257196 in 162 msec 2023-07-24 18:10:59,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-24 18:10:59,735 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=93145bb8e4cda600bece9c19c9b844c6, UNASSIGN in 165 msec 2023-07-24 18:10:59,736 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222259736"}]},"ts":"1690222259736"} 2023-07-24 18:10:59,737 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-24 18:10:59,739 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-24 18:10:59,740 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 178 msec 2023-07-24 18:10:59,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:10:59,867 INFO [Listener at localhost/42673] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-24 18:10:59,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-24 18:10:59,868 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,870 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-24 18:10:59,872 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,872 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:10:59,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:10:59,875 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 18:10:59,877 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/fam1, FileablePath, hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/recovered.edits] 2023-07-24 18:10:59,882 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/recovered.edits/4.seqid to hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/archive/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6/recovered.edits/4.seqid 2023-07-24 18:10:59,883 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/.tmp/data/np1/table1/93145bb8e4cda600bece9c19c9b844c6 2023-07-24 18:10:59,883 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-24 18:10:59,885 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,886 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-24 18:10:59,888 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-24 18:10:59,889 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,889 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-24 18:10:59,889 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222259889"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:59,891 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:10:59,891 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 93145bb8e4cda600bece9c19c9b844c6, NAME => 'np1:table1,,1690222258840.93145bb8e4cda600bece9c19c9b844c6.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:10:59,891 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
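[editor's note] The DisableTableProcedure and DeleteTableProcedure entries above (and the DeleteNamespaceProcedure that follows) are driven by ordinary Admin calls on the client side. A hedged sketch of that cleanup sequence is below; it follows the standard client API, and the class name is illustrative rather than taken from the test.

```java
// Hypothetical sketch of the client-side cleanup behind the procedures logged above.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropNamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      TableName t1 = TableName.valueOf("np1:table1");
      if (admin.tableExists(t1)) {
        if (admin.isTableEnabled(t1)) {
          admin.disableTable(t1); // DisableTableProcedure: regions UNASSIGNed, state=DISABLED in hbase:meta
        }
        admin.deleteTable(t1);    // DeleteTableProcedure: HFiles archived, rows removed from hbase:meta
      }
      admin.deleteNamespace("np1"); // DeleteNamespaceProcedure: namespace znode and quota removed
    }
  }
}
```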
2023-07-24 18:10:59,891 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222259891"}]},"ts":"9223372036854775807"} 2023-07-24 18:10:59,892 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-24 18:10:59,894 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-24 18:10:59,895 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-24 18:10:59,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-24 18:10:59,978 INFO [Listener at localhost/42673] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-24 18:10:59,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-24 18:10:59,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:10:59,991 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:10:59,994 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:10:59,996 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:10:59,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 18:10:59,999 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-24 18:10:59,999 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:10:59,999 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:11:00,001 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-24 18:11:00,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 18 msec 2023-07-24 18:11:00,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40097] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-24 18:11:00,098 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 18:11:00,098 INFO [Listener at 
localhost/42673] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:11:00,098 DEBUG [Listener at localhost/42673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x24130d34 to 127.0.0.1:57771 2023-07-24 18:11:00,098 DEBUG [Listener at localhost/42673] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,098 DEBUG [Listener at localhost/42673] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:11:00,099 DEBUG [Listener at localhost/42673] util.JVMClusterUtil(257): Found active master hash=1722846185, stopped=false 2023-07-24 18:11:00,099 DEBUG [Listener at localhost/42673] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:11:00,099 DEBUG [Listener at localhost/42673] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:11:00,099 DEBUG [Listener at localhost/42673] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-24 18:11:00,099 INFO [Listener at localhost/42673] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:11:00,100 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:00,100 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:00,100 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:00,100 INFO [Listener at localhost/42673] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:11:00,101 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:00,100 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:00,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:00,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:00,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:00,103 DEBUG [Listener at localhost/42673] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5022eed8 to 127.0.0.1:57771 2023-07-24 18:11:00,104 DEBUG [Listener at localhost/42673] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,104 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37455,1690222256867' ***** 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34233,1690222257031' ***** 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41283,1690222257196' ***** 2023-07-24 18:11:00,104 INFO [Listener at localhost/42673] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:00,104 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:00,104 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:00,104 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:00,114 INFO [RS:0;jenkins-hbase4:37455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@536de3b9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:00,114 INFO [RS:1;jenkins-hbase4:34233] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6f864f73{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:00,114 INFO [RS:2;jenkins-hbase4:41283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2b7448e1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:00,115 INFO [RS:0;jenkins-hbase4:37455] server.AbstractConnector(383): Stopped ServerConnector@11eb531b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:00,115 INFO [RS:0;jenkins-hbase4:37455] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:00,115 INFO [RS:1;jenkins-hbase4:34233] server.AbstractConnector(383): Stopped ServerConnector@78016663{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:00,115 INFO [RS:2;jenkins-hbase4:41283] server.AbstractConnector(383): Stopped ServerConnector@73eda5e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:00,116 INFO [RS:0;jenkins-hbase4:37455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@16c56918{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:00,116 INFO [RS:1;jenkins-hbase4:34233] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:00,116 INFO [RS:2;jenkins-hbase4:41283] session.HouseKeeper(149): node0 Stopped scavenging 
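[editor's note] The "Shutting down minicluster" and "***** STOPPING region server ... *****" entries mark the test teardown: each region server closes and flushes its regions before the utility stops DFS and ZooKeeper. A minimal sketch of the usual JUnit lifecycle that produces this sequence is shown below; class and method names are illustrative, and three region servers are assumed to match this run.

```java
// Hypothetical sketch of the minicluster lifecycle around a test like this one.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Starts HDFS, ZooKeeper, one master and three region servers.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Requests cluster shutdown: regions are closed and memstores flushed on each
    // region server, then the master, DFS and ZooKeeper are stopped.
    TEST_UTIL.shutdownMiniCluster();
  }
}
```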
2023-07-24 18:11:00,118 INFO [RS:0;jenkins-hbase4:37455] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2d8a119b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:00,118 INFO [RS:2;jenkins-hbase4:41283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@a8e6c2f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:00,118 INFO [RS:1;jenkins-hbase4:34233] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55941c90{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:00,118 INFO [RS:1;jenkins-hbase4:34233] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3de75ac7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:00,118 INFO [RS:2;jenkins-hbase4:41283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fb867a7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:00,118 INFO [RS:2;jenkins-hbase4:41283] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:00,119 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:00,119 INFO [RS:1;jenkins-hbase4:34233] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:00,119 INFO [RS:2;jenkins-hbase4:41283] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:00,120 INFO [RS:1;jenkins-hbase4:34233] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:00,120 INFO [RS:2;jenkins-hbase4:41283] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:00,120 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:00,120 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(3305): Received CLOSE for 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:11:00,121 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:11:00,121 DEBUG [RS:2;jenkins-hbase4:41283] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1edb45f0 to 127.0.0.1:57771 2023-07-24 18:11:00,120 INFO [RS:1;jenkins-hbase4:34233] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-24 18:11:00,121 DEBUG [RS:2;jenkins-hbase4:41283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,121 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(3305): Received CLOSE for f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:11:00,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 46dfa8a06c7fdd79272df9289c6bca07, disabling compactions & flushes 2023-07-24 18:11:00,120 INFO [RS:0;jenkins-hbase4:37455] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:00,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:11:00,122 INFO [RS:0;jenkins-hbase4:37455] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:00,122 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:11:00,121 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:11:00,122 DEBUG [RS:1;jenkins-hbase4:34233] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x39b73e37 to 127.0.0.1:57771 2023-07-24 18:11:00,122 INFO [RS:0;jenkins-hbase4:37455] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:00,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f70dd312b1c13ea7f7e08991ff819e6a, disabling compactions & flushes 2023-07-24 18:11:00,123 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(3305): Received CLOSE for d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:11:00,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:11:00,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. after waiting 0 ms 2023-07-24 18:11:00,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:11:00,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 46dfa8a06c7fdd79272df9289c6bca07 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-24 18:11:00,122 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:00,123 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:11:00,126 DEBUG [RS:0;jenkins-hbase4:37455] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a1d53dc to 127.0.0.1:57771 2023-07-24 18:11:00,126 DEBUG [RS:0;jenkins-hbase4:37455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 
2023-07-24 18:11:00,122 DEBUG [RS:1;jenkins-hbase4:34233] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,122 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1478): Online Regions={46dfa8a06c7fdd79272df9289c6bca07=hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07.} 2023-07-24 18:11:00,126 INFO [RS:1;jenkins-hbase4:34233] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:11:00,126 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d8df9fe238514618c9b3f963db5679ef, disabling compactions & flushes 2023-07-24 18:11:00,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. after waiting 0 ms 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:11:00,127 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,126 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1478): Online Regions={d8df9fe238514618c9b3f963db5679ef=hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef.} 2023-07-24 18:11:00,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. after waiting 0 ms 2023-07-24 18:11:00,131 DEBUG [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1504): Waiting on d8df9fe238514618c9b3f963db5679ef 2023-07-24 18:11:00,126 INFO [RS:1;jenkins-hbase4:34233] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:00,132 INFO [RS:1;jenkins-hbase4:34233] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:00,132 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:11:00,126 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1504): Waiting on 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:11:00,131 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 
2023-07-24 18:11:00,127 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,132 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:11:00,133 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:11:00,133 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:11:00,133 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:11:00,133 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:11:00,132 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-24 18:11:00,133 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-24 18:11:00,133 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing f70dd312b1c13ea7f7e08991ff819e6a 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-24 18:11:00,133 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1478): Online Regions={f70dd312b1c13ea7f7e08991ff819e6a=hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a., 1588230740=hbase:meta,,1.1588230740} 2023-07-24 18:11:00,133 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1504): Waiting on 1588230740, f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:11:00,137 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/quota/d8df9fe238514618c9b3f963db5679ef/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:11:00,138 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 2023-07-24 18:11:00,138 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d8df9fe238514618c9b3f963db5679ef: 2023-07-24 18:11:00,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690222258630.d8df9fe238514618c9b3f963db5679ef. 
2023-07-24 18:11:00,168 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/info/02ab3e975e6b4199b3803cdf90ea95ad 2023-07-24 18:11:00,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 02ab3e975e6b4199b3803cdf90ea95ad 2023-07-24 18:11:00,194 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/rep_barrier/ff249df5d652419a99079137f02ff6da 2023-07-24 18:11:00,199 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ff249df5d652419a99079137f02ff6da 2023-07-24 18:11:00,218 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/table/1c1abc23ff1842fba9dd10ec99970968 2023-07-24 18:11:00,223 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1c1abc23ff1842fba9dd10ec99970968 2023-07-24 18:11:00,224 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/info/02ab3e975e6b4199b3803cdf90ea95ad as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/info/02ab3e975e6b4199b3803cdf90ea95ad 2023-07-24 18:11:00,229 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 02ab3e975e6b4199b3803cdf90ea95ad 2023-07-24 18:11:00,230 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/info/02ab3e975e6b4199b3803cdf90ea95ad, entries=32, sequenceid=31, filesize=8.5 K 2023-07-24 18:11:00,230 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/rep_barrier/ff249df5d652419a99079137f02ff6da as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/rep_barrier/ff249df5d652419a99079137f02ff6da 2023-07-24 18:11:00,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ff249df5d652419a99079137f02ff6da 2023-07-24 18:11:00,237 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/rep_barrier/ff249df5d652419a99079137f02ff6da, entries=1, sequenceid=31, filesize=4.9 K 
2023-07-24 18:11:00,238 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/.tmp/table/1c1abc23ff1842fba9dd10ec99970968 as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/table/1c1abc23ff1842fba9dd10ec99970968 2023-07-24 18:11:00,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1c1abc23ff1842fba9dd10ec99970968 2023-07-24 18:11:00,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/table/1c1abc23ff1842fba9dd10ec99970968, entries=8, sequenceid=31, filesize=5.2 K 2023-07-24 18:11:00,246 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 113ms, sequenceid=31, compaction requested=false 2023-07-24 18:11:00,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-24 18:11:00,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:00,254 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:00,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:11:00,254 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:00,331 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37455,1690222256867; all regions closed. 2023-07-24 18:11:00,331 DEBUG [RS:0;jenkins-hbase4:37455] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-24 18:11:00,332 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1504): Waiting on 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:11:00,334 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1504): Waiting on f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:11:00,339 DEBUG [RS:0;jenkins-hbase4:37455] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs 2023-07-24 18:11:00,339 INFO [RS:0;jenkins-hbase4:37455] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37455%2C1690222256867:(num 1690222257961) 2023-07-24 18:11:00,339 DEBUG [RS:0;jenkins-hbase4:37455] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,339 INFO [RS:0;jenkins-hbase4:37455] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,340 INFO [RS:0;jenkins-hbase4:37455] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:00,340 INFO [RS:0;jenkins-hbase4:37455] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:00,340 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:00,340 INFO [RS:0;jenkins-hbase4:37455] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:00,340 INFO [RS:0;jenkins-hbase4:37455] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:00,341 INFO [RS:0;jenkins-hbase4:37455] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37455 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37455,1690222256867 2023-07-24 18:11:00,346 DEBUG [Listener at localhost/42673-EventThread] 
zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,347 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37455,1690222256867] 2023-07-24 18:11:00,347 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37455,1690222256867; numProcessing=1 2023-07-24 18:11:00,349 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37455,1690222256867 already deleted, retry=false 2023-07-24 18:11:00,350 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37455,1690222256867 expired; onlineServers=2 2023-07-24 18:11:00,532 DEBUG [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1504): Waiting on 46dfa8a06c7fdd79272df9289c6bca07 2023-07-24 18:11:00,534 DEBUG [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1504): Waiting on f70dd312b1c13ea7f7e08991ff819e6a 2023-07-24 18:11:00,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/.tmp/info/e87a9dbf16e64c65afb02415b9bb6221 2023-07-24 18:11:00,566 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/.tmp/m/d85c9337f8114d99a75cc8cc0028f2a6 2023-07-24 18:11:00,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e87a9dbf16e64c65afb02415b9bb6221 2023-07-24 18:11:00,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/.tmp/info/e87a9dbf16e64c65afb02415b9bb6221 as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/info/e87a9dbf16e64c65afb02415b9bb6221 2023-07-24 18:11:00,574 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/.tmp/m/d85c9337f8114d99a75cc8cc0028f2a6 as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/m/d85c9337f8114d99a75cc8cc0028f2a6 2023-07-24 18:11:00,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e87a9dbf16e64c65afb02415b9bb6221 2023-07-24 18:11:00,577 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/info/e87a9dbf16e64c65afb02415b9bb6221, entries=3, sequenceid=8, filesize=5.0 K 2023-07-24 18:11:00,578 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 46dfa8a06c7fdd79272df9289c6bca07 in 455ms, sequenceid=8, compaction requested=false 2023-07-24 18:11:00,578 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-24 18:11:00,579 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/m/d85c9337f8114d99a75cc8cc0028f2a6, entries=1, sequenceid=7, filesize=4.9 K 2023-07-24 18:11:00,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for f70dd312b1c13ea7f7e08991ff819e6a in 450ms, sequenceid=7, compaction requested=false 2023-07-24 18:11:00,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-24 18:11:00,589 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/namespace/46dfa8a06c7fdd79272df9289c6bca07/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/data/hbase/rsgroup/f70dd312b1c13ea7f7e08991ff819e6a/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-24 18:11:00,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 46dfa8a06c7fdd79272df9289c6bca07: 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222258167.46dfa8a06c7fdd79272df9289c6bca07. 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:00,590 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f70dd312b1c13ea7f7e08991ff819e6a: 2023-07-24 18:11:00,590 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222258265.f70dd312b1c13ea7f7e08991ff819e6a. 
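Editor's note: the hbase:rsgroup region flushed and closed above backs the rsgroup metadata that TestRSGroupsAdmin1 exercises. A hedged sketch, assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient API; the group name test_group is purely illustrative and does not appear in this run:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // Group definitions are persisted in the hbase:rsgroup table whose
      // region (f70dd312...) is being flushed and closed in the log above.
      groups.addRSGroup("test_group");
      RSGroupInfo info = groups.getRSGroupInfo("test_group");
      System.out.println(info.getName() + " servers=" + info.getServers());
    }
  }
}
```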
2023-07-24 18:11:00,601 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:00,601 INFO [RS:0;jenkins-hbase4:37455] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37455,1690222256867; zookeeper connection closed. 2023-07-24 18:11:00,601 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:37455-0x101988765090001, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:00,601 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7a6e205] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7a6e205 2023-07-24 18:11:00,733 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41283,1690222257196; all regions closed. 2023-07-24 18:11:00,733 DEBUG [RS:2;jenkins-hbase4:41283] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 18:11:00,735 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34233,1690222257031; all regions closed. 2023-07-24 18:11:00,736 DEBUG [RS:1;jenkins-hbase4:34233] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-24 18:11:00,742 DEBUG [RS:2;jenkins-hbase4:41283] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs 2023-07-24 18:11:00,742 INFO [RS:2;jenkins-hbase4:41283] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41283%2C1690222257196:(num 1690222257962) 2023-07-24 18:11:00,742 DEBUG [RS:2;jenkins-hbase4:41283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,742 INFO [RS:2;jenkins-hbase4:41283] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,744 INFO [RS:2;jenkins-hbase4:41283] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:00,744 INFO [RS:2;jenkins-hbase4:41283] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:00,744 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:00,744 INFO [RS:2;jenkins-hbase4:41283] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:00,744 INFO [RS:2;jenkins-hbase4:41283] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
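Editor's note: the entries above are the orderly shutdown of individual region servers (WALs archived to oldWALs, leases and chores cancelled, compaction threads drained). A minimal sketch, assuming the HBaseTestingUtility/MiniHBaseCluster test APIs, of stopping a single region server and waiting for that same sequence to finish:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;

public class StopOneRegionServer {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);                       // 1 master, 3 region servers
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    // Request shutdown of region server 0, then block until its regions are
    // closed and its WAL has been rolled and archived, as logged above.
    cluster.stopRegionServer(0);
    cluster.waitOnRegionServer(0);
    util.shutdownMiniCluster();
  }
}
```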
2023-07-24 18:11:00,745 INFO [RS:2;jenkins-hbase4:41283] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41283 2023-07-24 18:11:00,748 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:11:00,748 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,748 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41283,1690222257196 2023-07-24 18:11:00,749 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41283,1690222257196] 2023-07-24 18:11:00,749 DEBUG [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs 2023-07-24 18:11:00,749 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41283,1690222257196; numProcessing=2 2023-07-24 18:11:00,749 INFO [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34233%2C1690222257031.meta:.meta(num 1690222258089) 2023-07-24 18:11:00,754 DEBUG [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/oldWALs 2023-07-24 18:11:00,754 INFO [RS:1;jenkins-hbase4:34233] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34233%2C1690222257031:(num 1690222257961) 2023-07-24 18:11:00,754 DEBUG [RS:1;jenkins-hbase4:34233] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,754 INFO [RS:1;jenkins-hbase4:34233] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:00,754 INFO [RS:1;jenkins-hbase4:34233] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:00,754 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
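Editor's note: the ZKWatcher events above (NodeDeleted and NodeChildrenChanged under /hbase/rs) are how the master learns that a region server's ephemeral znode is gone before RegionServerTracker processes the expiration. An illustrative sketch using the plain ZooKeeper client against the quorum shown in the log (127.0.0.1:57771); the class name RsZNodeWatcher is hypothetical:

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class RsZNodeWatcher {
  public static void main(String[] args) throws Exception {
    // Connect to the test ensemble; the session watcher is left as a no-op.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:57771", 30000, event -> { });
    Watcher watcher = (WatchedEvent event) -> {
      // NodeChildrenChanged on /hbase/rs fires whenever a region server's
      // ephemeral znode appears or disappears, as in the events logged above.
      System.out.println("event=" + event.getType() + " path=" + event.getPath());
    };
    List<String> servers = zk.getChildren("/hbase/rs", watcher);
    System.out.println("online region servers: " + servers);
    Thread.sleep(60_000);   // keep the process alive long enough to observe events
    zk.close();
  }
}
```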
2023-07-24 18:11:00,755 INFO [RS:1;jenkins-hbase4:34233] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34233 2023-07-24 18:11:00,755 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41283,1690222257196 already deleted, retry=false 2023-07-24 18:11:00,755 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41283,1690222257196 expired; onlineServers=1 2023-07-24 18:11:00,758 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:00,758 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34233,1690222257031 2023-07-24 18:11:00,759 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34233,1690222257031] 2023-07-24 18:11:00,759 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34233,1690222257031; numProcessing=3 2023-07-24 18:11:00,760 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34233,1690222257031 already deleted, retry=false 2023-07-24 18:11:00,761 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34233,1690222257031 expired; onlineServers=0 2023-07-24 18:11:00,761 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40097,1690222256671' ***** 2023-07-24 18:11:00,761 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 18:11:00,761 DEBUG [M:0;jenkins-hbase4:40097] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@628f03e7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:00,761 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:00,763 INFO [M:0;jenkins-hbase4:40097] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7ebbee1c{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:00,763 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:00,763 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:00,764 INFO [M:0;jenkins-hbase4:40097] server.AbstractConnector(383): Stopped ServerConnector@198b5911{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:00,764 INFO 
[M:0;jenkins-hbase4:40097] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:00,764 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:00,764 INFO [M:0;jenkins-hbase4:40097] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4f11c631{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:00,764 INFO [M:0;jenkins-hbase4:40097] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f546421{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:00,764 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40097,1690222256671 2023-07-24 18:11:00,764 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40097,1690222256671; all regions closed. 2023-07-24 18:11:00,764 DEBUG [M:0;jenkins-hbase4:40097] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:00,765 INFO [M:0;jenkins-hbase4:40097] master.HMaster(1491): Stopping master jetty server 2023-07-24 18:11:00,765 INFO [M:0;jenkins-hbase4:40097] server.AbstractConnector(383): Stopped ServerConnector@255b4a1d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:00,766 DEBUG [M:0;jenkins-hbase4:40097] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 18:11:00,766 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 18:11:00,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222257674] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222257674,5,FailOnTimeoutGroup] 2023-07-24 18:11:00,766 DEBUG [M:0;jenkins-hbase4:40097] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 18:11:00,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222257673] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222257673,5,FailOnTimeoutGroup] 2023-07-24 18:11:00,767 INFO [M:0;jenkins-hbase4:40097] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 18:11:00,767 INFO [M:0;jenkins-hbase4:40097] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
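Editor's note: the master shutdown messages above enumerate the ScheduledChores (CompactedHFilesCleaner, CompactionThroughputTuner, the HFile/Log cleaner threads) that the ChoreService cancels. A rough sketch of that ChoreService/ScheduledChore pattern, with a hypothetical DemoChore and stopper; this is not the cleaners' actual implementation:

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws Exception {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    ChoreService service = new ChoreService("demo");
    // Periodic task comparable to the chores listed in the shutdown message;
    // the period argument is in milliseconds.
    ScheduledChore chore = new ScheduledChore("DemoChore", stopper, 1000) {
      @Override protected void chore() { System.out.println("chore tick"); }
    };
    service.scheduleChore(chore);
    Thread.sleep(5_000);
    stopper.stop("test done");
    service.shutdown();   // cancels any still-scheduled chores, as at shutdown above
  }
}
```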
2023-07-24 18:11:00,767 INFO [M:0;jenkins-hbase4:40097] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:00,767 DEBUG [M:0;jenkins-hbase4:40097] master.HMaster(1512): Stopping service threads 2023-07-24 18:11:00,767 INFO [M:0;jenkins-hbase4:40097] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 18:11:00,768 ERROR [M:0;jenkins-hbase4:40097] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 18:11:00,768 INFO [M:0;jenkins-hbase4:40097] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 18:11:00,768 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-24 18:11:00,768 DEBUG [M:0;jenkins-hbase4:40097] zookeeper.ZKUtil(398): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 18:11:00,769 WARN [M:0;jenkins-hbase4:40097] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 18:11:00,769 INFO [M:0;jenkins-hbase4:40097] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 18:11:00,769 INFO [M:0;jenkins-hbase4:40097] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 18:11:00,769 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:11:00,769 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:00,769 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:00,769 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:11:00,769 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 18:11:00,770 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.99 KB heapSize=109.13 KB 2023-07-24 18:11:00,789 INFO [M:0;jenkins-hbase4:40097] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.99 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/96f94219f4194b40a98bde12fd4f4701 2023-07-24 18:11:00,795 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/96f94219f4194b40a98bde12fd4f4701 as hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/96f94219f4194b40a98bde12fd4f4701 2023-07-24 18:11:00,800 INFO [M:0;jenkins-hbase4:40097] regionserver.HStore(1080): Added hdfs://localhost:40065/user/jenkins/test-data/a894f154-3cac-5aa0-dcc1-602e85859526/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/96f94219f4194b40a98bde12fd4f4701, entries=24, sequenceid=194, filesize=12.4 K 2023-07-24 18:11:00,801 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegion(2948): Finished flush of dataSize ~92.99 KB/95219, heapSize ~109.11 KB/111728, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=194, compaction requested=false 2023-07-24 18:11:00,803 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:00,803 DEBUG [M:0;jenkins-hbase4:40097] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:00,806 INFO [M:0;jenkins-hbase4:40097] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 18:11:00,806 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:00,807 INFO [M:0;jenkins-hbase4:40097] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40097 2023-07-24 18:11:00,808 DEBUG [M:0;jenkins-hbase4:40097] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40097,1690222256671 already deleted, retry=false 2023-07-24 18:11:00,850 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:00,850 INFO [RS:2;jenkins-hbase4:41283] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41283,1690222257196; zookeeper connection closed. 
2023-07-24 18:11:00,850 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:41283-0x101988765090003, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:00,852 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@3201bbb4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@3201bbb4 2023-07-24 18:11:00,950 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:00,951 INFO [M:0;jenkins-hbase4:40097] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40097,1690222256671; zookeeper connection closed. 2023-07-24 18:11:00,951 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): master:40097-0x101988765090000, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:01,051 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:01,051 INFO [RS:1;jenkins-hbase4:34233] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34233,1690222257031; zookeeper connection closed. 2023-07-24 18:11:01,051 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): regionserver:34233-0x101988765090002, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:01,051 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7f887cf1] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7f887cf1 2023-07-24 18:11:01,051 INFO [Listener at localhost/42673] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-24 18:11:01,052 WARN [Listener at localhost/42673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:11:01,056 INFO [Listener at localhost/42673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:11:01,161 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:11:01,162 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-927584753-172.31.14.131-1690222255799 (Datanode Uuid 39cd6a77-5cd2-4047-9606-487508b6b975) service to localhost/127.0.0.1:40065 2023-07-24 18:11:01,162 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data5/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:11:01,163 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data6/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:11:01,165 WARN [Listener at localhost/42673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:11:01,168 INFO [Listener at localhost/42673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:11:01,274 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:11:01,274 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-927584753-172.31.14.131-1690222255799 (Datanode Uuid 18450c47-9ac8-4411-b73a-05a92e967192) service to localhost/127.0.0.1:40065 2023-07-24 18:11:01,275 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data3/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:11:01,275 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data4/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:11:01,276 WARN [Listener at localhost/42673] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-24 18:11:01,280 INFO [Listener at localhost/42673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:11:01,385 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-24 18:11:01,385 WARN [BP-927584753-172.31.14.131-1690222255799 heartbeating to localhost/127.0.0.1:40065] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-927584753-172.31.14.131-1690222255799 (Datanode Uuid ab93f2a9-b4e2-4ada-aabb-29ab6837313a) service to localhost/127.0.0.1:40065 2023-07-24 18:11:01,386 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data1/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-24 18:11:01,386 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/cluster_1fa41fb1-9fb8-0404-929f-9e2c7ede32a9/dfs/data/data2/current/BP-927584753-172.31.14.131-1690222255799] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk 
information: sleep interrupted 2023-07-24 18:11:01,396 INFO [Listener at localhost/42673] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-24 18:11:01,517 INFO [Listener at localhost/42673] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.log.dir so I do NOT create it in target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/157f8f56-eab5-7a81-7053-a679b1a155cf/hadoop.tmp.dir so I do NOT create it in target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc, deleteOnExit=true 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-24 18:11:01,546 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/test.cache.data in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.tmp.dir in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] 
hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-24 18:11:01,547 DEBUG [Listener at localhost/42673] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:11:01,547 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/nfs.dump.dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-24 18:11:01,548 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-24 18:11:01,549 INFO [Listener at localhost/42673] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-24 18:11:01,553 WARN [Listener at localhost/42673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:11:01,553 WARN [Listener at localhost/42673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:11:01,594 WARN [Listener at localhost/42673] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:11:01,596 INFO [Listener at localhost/42673] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:11:01,601 INFO [Listener at localhost/42673] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/Jetty_localhost_46285_hdfs____.zgyycp/webapp 2023-07-24 18:11:01,614 DEBUG [Listener at localhost/42673-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x10198876509000a, quorum=127.0.0.1:57771, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-24 18:11:01,614 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x10198876509000a, quorum=127.0.0.1:57771, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-24 18:11:01,695 INFO [Listener at localhost/42673] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46285 2023-07-24 18:11:01,699 WARN [Listener at localhost/42673] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-24 18:11:01,699 WARN [Listener at localhost/42673] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-24 18:11:01,739 WARN [Listener at localhost/33823] common.MetricsLoggerTask(153): Metrics logging will not 
be async since the logger is not log4j 2023-07-24 18:11:01,752 WARN [Listener at localhost/33823] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:11:01,755 WARN [Listener at localhost/33823] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:11:01,756 INFO [Listener at localhost/33823] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:11:01,762 INFO [Listener at localhost/33823] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/Jetty_localhost_34981_datanode____.wfeeg8/webapp 2023-07-24 18:11:01,857 INFO [Listener at localhost/33823] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34981 2023-07-24 18:11:01,872 WARN [Listener at localhost/33497] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:11:01,892 WARN [Listener at localhost/33497] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:11:01,894 WARN [Listener at localhost/33497] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:11:01,895 INFO [Listener at localhost/33497] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:11:01,899 INFO [Listener at localhost/33497] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/Jetty_localhost_42373_datanode____.218y2w/webapp 2023-07-24 18:11:01,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f545c3eac232507: Processing first storage report for DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1 from datanode efd52962-d7a6-4f4d-9e37-fe140ee138f2 2023-07-24 18:11:01,998 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f545c3eac232507: from storage DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1 node DatanodeRegistration(127.0.0.1:37471, datanodeUuid=efd52962-d7a6-4f4d-9e37-fe140ee138f2, infoPort=41025, infoSecurePort=0, ipcPort=33497, storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:11:01,998 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f545c3eac232507: Processing first storage report for DS-8f6522c6-db65-4c78-ad75-6ae9c916d346 from datanode efd52962-d7a6-4f4d-9e37-fe140ee138f2 2023-07-24 18:11:01,998 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f545c3eac232507: from storage DS-8f6522c6-db65-4c78-ad75-6ae9c916d346 node DatanodeRegistration(127.0.0.1:37471, datanodeUuid=efd52962-d7a6-4f4d-9e37-fe140ee138f2, infoPort=41025, infoSecurePort=0, ipcPort=33497, storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:11:01,998 INFO 
[Listener at localhost/33497] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42373 2023-07-24 18:11:02,008 WARN [Listener at localhost/36565] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:11:02,021 WARN [Listener at localhost/36565] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-24 18:11:02,023 WARN [Listener at localhost/36565] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-24 18:11:02,024 INFO [Listener at localhost/36565] log.Slf4jLog(67): jetty-6.1.26 2023-07-24 18:11:02,029 INFO [Listener at localhost/36565] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/Jetty_localhost_45727_datanode____nq8ide/webapp 2023-07-24 18:11:02,143 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x25641f5c22b72e9e: Processing first storage report for DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae from datanode b4f805ea-43ae-4482-b78d-3158d061c2ed 2023-07-24 18:11:02,143 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x25641f5c22b72e9e: from storage DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae node DatanodeRegistration(127.0.0.1:42231, datanodeUuid=b4f805ea-43ae-4482-b78d-3158d061c2ed, infoPort=33163, infoSecurePort=0, ipcPort=36565, storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:11:02,143 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x25641f5c22b72e9e: Processing first storage report for DS-d8d361df-3299-4208-addb-bee7911d8fec from datanode b4f805ea-43ae-4482-b78d-3158d061c2ed 2023-07-24 18:11:02,143 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x25641f5c22b72e9e: from storage DS-d8d361df-3299-4208-addb-bee7911d8fec node DatanodeRegistration(127.0.0.1:42231, datanodeUuid=b4f805ea-43ae-4482-b78d-3158d061c2ed, infoPort=33163, infoSecurePort=0, ipcPort=36565, storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:11:02,162 INFO [Listener at localhost/36565] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45727 2023-07-24 18:11:02,173 WARN [Listener at localhost/45633] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-24 18:11:02,298 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa0cb110d70b3df8d: Processing first storage report for DS-dedd1d5f-2249-49d5-974b-4438c709f00b from datanode 387f921a-20de-4d09-bc46-0213032753c9 2023-07-24 18:11:02,299 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa0cb110d70b3df8d: from storage DS-dedd1d5f-2249-49d5-974b-4438c709f00b node DatanodeRegistration(127.0.0.1:46613, datanodeUuid=387f921a-20de-4d09-bc46-0213032753c9, infoPort=32769, infoSecurePort=0, ipcPort=45633, 
storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-24 18:11:02,299 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa0cb110d70b3df8d: Processing first storage report for DS-f1b81a52-2e1f-4a56-9e91-2bc931dbdee3 from datanode 387f921a-20de-4d09-bc46-0213032753c9 2023-07-24 18:11:02,299 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa0cb110d70b3df8d: from storage DS-f1b81a52-2e1f-4a56-9e91-2bc931dbdee3 node DatanodeRegistration(127.0.0.1:46613, datanodeUuid=387f921a-20de-4d09-bc46-0213032753c9, infoPort=32769, infoSecurePort=0, ipcPort=45633, storageInfo=lv=-57;cid=testClusterID;nsid=29072327;c=1690222261555), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-24 18:11:02,395 DEBUG [Listener at localhost/45633] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76 2023-07-24 18:11:02,404 INFO [Listener at localhost/45633] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/zookeeper_0, clientPort=56931, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-24 18:11:02,406 INFO [Listener at localhost/45633] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56931 2023-07-24 18:11:02,406 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,407 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,432 INFO [Listener at localhost/45633] util.FSUtils(471): Created version file at hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c with version=8 2023-07-24 18:11:02,432 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:44625/user/jenkins/test-data/3774405a-bc55-b0f4-eba1-5948df39d27f/hbase-staging 2023-07-24 18:11:02,433 DEBUG [Listener at localhost/45633] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-24 18:11:02,433 DEBUG [Listener at localhost/45633] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-24 18:11:02,433 DEBUG [Listener at localhost/45633] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 
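Editor's note: at this point the first mini cluster has been torn down and a second one is coming up (fresh DFS, MiniZooKeeperCluster on clientPort=56931, new hbase.rootdir, random master/region-server ports). A sketch of the corresponding test-side calls, assuming the HBaseTestingUtility and StartMiniClusterOption builder matching the option string logged above:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class RestartMiniCluster {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Same shape as the logged option: 1 master, 3 region servers,
    // 3 data nodes, 1 ZooKeeper server.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);
    try {
      // ... run test assertions against util.getAdmin() here ...
    } finally {
      util.shutdownMiniCluster();   // produces the "Minicluster is down" line
    }
  }
}
```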
2023-07-24 18:11:02,433 DEBUG [Listener at localhost/45633] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-24 18:11:02,434 INFO [Listener at localhost/45633] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:02,434 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,434 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,434 INFO [Listener at localhost/45633] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:02,435 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,435 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:02,435 INFO [Listener at localhost/45633] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:02,435 INFO [Listener at localhost/45633] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36991 2023-07-24 18:11:02,436 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,437 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,438 INFO [Listener at localhost/45633] zookeeper.RecoverableZooKeeper(93): Process identifier=master:36991 connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:02,445 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:369910x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:02,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:36991-0x10198877b860000 connected 2023-07-24 18:11:02,463 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:02,463 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:02,464 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:02,467 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started 
handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36991 2023-07-24 18:11:02,467 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36991 2023-07-24 18:11:02,468 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36991 2023-07-24 18:11:02,469 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36991 2023-07-24 18:11:02,470 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36991 2023-07-24 18:11:02,472 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:02,472 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:02,472 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:02,473 INFO [Listener at localhost/45633] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-24 18:11:02,473 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:02,473 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:02,473 INFO [Listener at localhost/45633] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
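Editor's note: the RpcExecutor lines above show the server instantiating its call queues (default.FPBQ, priority.RWQ, replication, metaPriority) with a small handlerCount=3 for the mini cluster. A hedged configuration sketch of the standard knobs behind those counts; the values below are illustrative, not what this test sets:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcQueueTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Number of RPC handler threads per region server / master.
    conf.setInt("hbase.regionserver.handler.count", 30);
    // Fraction of handlers that get their own call queue (0 = one shared queue).
    conf.setFloat("hbase.ipc.server.callqueue.handler.factor", 0.1f);
    // Read/write split of the call queues used by the RWQ executor.
    conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
    System.out.println(conf.get("hbase.regionserver.handler.count"));
  }
}
```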
2023-07-24 18:11:02,474 INFO [Listener at localhost/45633] http.HttpServer(1146): Jetty bound to port 38029 2023-07-24 18:11:02,474 INFO [Listener at localhost/45633] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:02,485 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,485 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@147da2bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:02,486 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,486 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6994f8ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:02,610 INFO [Listener at localhost/45633] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:02,611 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:02,611 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:02,611 INFO [Listener at localhost/45633] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:11:02,612 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,613 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@54eb7694{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/jetty-0_0_0_0-38029-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7757214739115986422/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:02,614 INFO [Listener at localhost/45633] server.AbstractConnector(333): Started ServerConnector@399a6c{HTTP/1.1, (http/1.1)}{0.0.0.0:38029} 2023-07-24 18:11:02,614 INFO [Listener at localhost/45633] server.Server(415): Started @42673ms 2023-07-24 18:11:02,614 INFO [Listener at localhost/45633] master.HMaster(444): hbase.rootdir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c, hbase.cluster.distributed=false 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,628 INFO 
[Listener at localhost/45633] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:02,628 INFO [Listener at localhost/45633] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:02,629 INFO [Listener at localhost/45633] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35379 2023-07-24 18:11:02,629 INFO [Listener at localhost/45633] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:02,631 DEBUG [Listener at localhost/45633] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:02,632 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,633 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,634 INFO [Listener at localhost/45633] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35379 connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:02,641 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:353790x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:02,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35379-0x10198877b860001 connected 2023-07-24 18:11:02,643 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:02,644 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:02,644 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:02,645 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35379 2023-07-24 18:11:02,645 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35379 2023-07-24 18:11:02,649 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35379 2023-07-24 18:11:02,650 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35379 2023-07-24 18:11:02,650 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35379 2023-07-24 18:11:02,652 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:02,652 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:02,653 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:02,653 INFO [Listener at localhost/45633] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:02,653 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:02,654 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:02,654 INFO [Listener at localhost/45633] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:02,655 INFO [Listener at localhost/45633] http.HttpServer(1146): Jetty bound to port 42497 2023-07-24 18:11:02,655 INFO [Listener at localhost/45633] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:02,656 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,657 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e767489{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:02,657 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,657 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6344f3ff{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:02,790 INFO [Listener at localhost/45633] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:02,790 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:02,790 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:02,791 INFO [Listener at localhost/45633] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:11:02,791 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,792 INFO 
[Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5e71051c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/jetty-0_0_0_0-42497-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7811949272085425017/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:02,793 INFO [Listener at localhost/45633] server.AbstractConnector(333): Started ServerConnector@5f7a00b2{HTTP/1.1, (http/1.1)}{0.0.0.0:42497} 2023-07-24 18:11:02,794 INFO [Listener at localhost/45633] server.Server(415): Started @42852ms 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,805 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:02,806 INFO [Listener at localhost/45633] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:02,806 INFO [Listener at localhost/45633] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38391 2023-07-24 18:11:02,807 INFO [Listener at localhost/45633] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:02,808 DEBUG [Listener at localhost/45633] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:02,809 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,810 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,811 INFO [Listener at localhost/45633] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38391 connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:02,815 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:383910x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 
18:11:02,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38391-0x10198877b860002 connected 2023-07-24 18:11:02,816 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:02,817 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:02,817 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:02,818 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38391 2023-07-24 18:11:02,818 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38391 2023-07-24 18:11:02,818 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38391 2023-07-24 18:11:02,818 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38391 2023-07-24 18:11:02,821 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38391 2023-07-24 18:11:02,822 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:02,823 INFO [Listener at localhost/45633] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
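[Editorial sketch, not part of the captured log] The ZKWatcher lines above show each server connecting to the test ensemble at 127.0.0.1:56931 and setting watches on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. A plain ZooKeeper client can reproduce that pattern against the same ensemble; the ensemble address, session timeout and znode paths come from the log, everything else is assumed.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkZnodeProbe {
  public static void main(String[] args) throws Exception {
    Watcher watcher = (WatchedEvent event) ->
        System.out.println("event=" + event.getType() + " path=" + event.getPath());
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56931", 90_000, watcher);
    // exists() with watch=true mirrors "Set watcher on znode that does not yet exist"
    System.out.println("/hbase/master  -> " + zk.exists("/hbase/master", true));
    System.out.println("/hbase/running -> " + zk.exists("/hbase/running", true));
    zk.close();
  }
}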
2023-07-24 18:11:02,824 INFO [Listener at localhost/45633] http.HttpServer(1146): Jetty bound to port 33241 2023-07-24 18:11:02,824 INFO [Listener at localhost/45633] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:02,825 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,825 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@802ef93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:02,825 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,826 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1799bcfb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:02,947 INFO [Listener at localhost/45633] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:02,948 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:02,948 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:02,948 INFO [Listener at localhost/45633] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:02,949 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,950 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3c048bae{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/jetty-0_0_0_0-33241-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1640843110698743075/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:02,952 INFO [Listener at localhost/45633] server.AbstractConnector(333): Started ServerConnector@2d998fb6{HTTP/1.1, (http/1.1)}{0.0.0.0:33241} 2023-07-24 18:11:02,952 INFO [Listener at localhost/45633] server.Server(415): Started @43011ms 2023-07-24 18:11:02,966 INFO [Listener at localhost/45633] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:02,966 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,966 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,967 INFO [Listener at localhost/45633] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:02,967 INFO 
[Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:02,967 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:02,967 INFO [Listener at localhost/45633] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:02,968 INFO [Listener at localhost/45633] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41307 2023-07-24 18:11:02,968 INFO [Listener at localhost/45633] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:02,970 DEBUG [Listener at localhost/45633] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:02,971 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,972 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:02,973 INFO [Listener at localhost/45633] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41307 connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:02,976 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:413070x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:02,978 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:413070x0, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:02,978 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41307-0x10198877b860003 connected 2023-07-24 18:11:02,978 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:02,979 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:02,982 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41307 2023-07-24 18:11:02,983 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41307 2023-07-24 18:11:02,989 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41307 2023-07-24 18:11:02,990 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41307 2023-07-24 18:11:02,990 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=41307 2023-07-24 18:11:02,991 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:02,992 INFO [Listener at localhost/45633] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-24 18:11:02,993 INFO [Listener at localhost/45633] http.HttpServer(1146): Jetty bound to port 35101 2023-07-24 18:11:02,993 INFO [Listener at localhost/45633] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:02,998 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,999 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@478dee5b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:02,999 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:02,999 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6499b8a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:03,121 INFO [Listener at localhost/45633] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:03,122 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:03,122 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:03,123 INFO [Listener at localhost/45633] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-24 18:11:03,124 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:03,125 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3324e9d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/jetty-0_0_0_0-35101-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1602208506966843843/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:03,127 INFO [Listener at localhost/45633] server.AbstractConnector(333): Started ServerConnector@444ab1ff{HTTP/1.1, (http/1.1)}{0.0.0.0:35101} 2023-07-24 18:11:03,127 INFO [Listener at localhost/45633] server.Server(415): Started @43186ms 2023-07-24 18:11:03,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:03,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@891e88{HTTP/1.1, (http/1.1)}{0.0.0.0:40979} 2023-07-24 18:11:03,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43193ms 2023-07-24 18:11:03,135 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,136 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:03,137 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,139 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:03,139 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:03,139 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:03,140 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,139 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:03,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:03,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,36991,1690222262434 from backup master directory 2023-07-24 18:11:03,145 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:03,146 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,146 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-24 18:11:03,146 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:03,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,180 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/hbase.id with ID: 347b5a29-c4df-44ed-8790-7225812d4747 2023-07-24 18:11:03,193 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:03,195 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,216 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5a647a4e to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:03,228 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@e1e3795, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:03,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:03,229 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-24 18:11:03,230 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:03,232 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store-tmp 2023-07-24 18:11:03,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:03,254 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:11:03,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:03,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:03,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:11:03,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:03,255 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
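[Editorial sketch, not part of the captured log] The 'master:store' table descriptor printed above, with its single 'proc' family, corresponds roughly to the following HBase 2.x builder calls. Attribute values are copied from the log line; the class name and everything not shown are assumed defaults, so this is an illustration of the descriptor, not the code HBase actually runs.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MasterStoreDescriptorSketch {
  public static TableDescriptor build() {
    ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("proc"))
        .setMaxVersions(1)                  // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
        .setInMemory(false)                 // IN_MEMORY => 'false'
        .setBlocksize(65536)                // BLOCKSIZE => '65536'
        .build();
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("master", "store"))  // table 'master:store'
        .setColumnFamily(proc)
        .build();
  }
}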
2023-07-24 18:11:03,255 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:03,256 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/WALs/jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36991%2C1690222262434, suffix=, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/WALs/jenkins-hbase4.apache.org,36991,1690222262434, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/oldWALs, maxLogs=10 2023-07-24 18:11:03,284 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:03,284 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:03,285 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:03,300 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/WALs/jenkins-hbase4.apache.org,36991,1690222262434/jenkins-hbase4.apache.org%2C36991%2C1690222262434.1690222263259 2023-07-24 18:11:03,302 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK], DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK], DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK]] 2023-07-24 18:11:03,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:03,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:03,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,303 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,311 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,312 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-24 18:11:03,312 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-24 18:11:03,315 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,319 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-24 18:11:03,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:03,323 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10632955520, jitterRate=-0.009728848934173584}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:03,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:03,326 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-24 18:11:03,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-24 18:11:03,328 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-24 18:11:03,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-24 18:11:03,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-24 18:11:03,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-24 18:11:03,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-24 18:11:03,335 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-24 18:11:03,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-24 18:11:03,337 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-24 18:11:03,337 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-24 18:11:03,338 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-24 18:11:03,344 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-24 18:11:03,345 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-24 18:11:03,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-24 18:11:03,348 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:03,348 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:03,348 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-24 18:11:03,348 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,351 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,36991,1690222262434, sessionid=0x10198877b860000, setting cluster-up flag (Was=false) 2023-07-24 18:11:03,354 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:03,355 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,366 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-24 18:11:03,367 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,371 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-24 18:11:03,376 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:03,377 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.hbase-snapshot/.tmp 2023-07-24 18:11:03,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-24 18:11:03,379 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-24 18:11:03,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-24 18:11:03,381 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:03,381 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
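[Editorial sketch, not part of the captured log] The lines that follow show the active master registering the RSGroupAdminService coprocessor and the rsgroup-aware machinery that TestRSGroupsAdmin1 exercises. That wiring is normally done through configuration before the mini cluster starts; the snippet below is an illustrative setup under that assumption, not the exact code of the test class.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint;
import org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer;

public class RsGroupSetupSketch {
  public static Configuration rsGroupEnabledConf() {
    Configuration conf = HBaseConfiguration.create();
    // Register the master coprocessor that serves RSGroupAdminService (seen loading above).
    conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
        RSGroupAdminEndpoint.class.getName());
    // Use the group-aware balancer so regions stay within their assigned server group.
    conf.set("hbase.master.loadbalancer.class",
        RSGroupBasedLoadBalancer.class.getName());
    return conf;
  }
}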
2023-07-24 18:11:03,382 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-24 18:11:03,393 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:03,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:11:03,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-24 18:11:03,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:03,394 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690222293403 2023-07-24 18:11:03,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-24 18:11:03,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-24 18:11:03,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-24 18:11:03,403 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-24 18:11:03,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-24 18:11:03,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-24 18:11:03,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,404 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:11:03,404 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-24 18:11:03,404 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-24 18:11:03,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-24 18:11:03,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-24 18:11:03,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-24 18:11:03,405 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-24 18:11:03,406 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:03,406 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222263406,5,FailOnTimeoutGroup] 2023-07-24 18:11:03,411 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222263406,5,FailOnTimeoutGroup] 2023-07-24 18:11:03,411 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-24 18:11:03,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,413 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,427 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:03,428 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:03,428 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c 2023-07-24 18:11:03,431 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(951): ClusterId : 347b5a29-c4df-44ed-8790-7225812d4747 2023-07-24 18:11:03,431 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(951): ClusterId : 347b5a29-c4df-44ed-8790-7225812d4747 2023-07-24 18:11:03,431 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(951): ClusterId : 347b5a29-c4df-44ed-8790-7225812d4747 2023-07-24 18:11:03,431 DEBUG [RS:0;jenkins-hbase4:35379] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:03,431 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:03,431 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:03,434 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:03,434 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:03,434 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:03,434 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:03,434 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:03,434 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:03,436 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:03,436 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:03,436 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:03,441 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ReadOnlyZKClient(139): Connect 0x7651fb37 to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:03,443 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ReadOnlyZKClient(139): Connect 0x00d12096 to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:03,443 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ReadOnlyZKClient(139): Connect 0x6a04b9d9 to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:03,447 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:03,461 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:11:03,462 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/info 2023-07-24 18:11:03,463 DEBUG [RS:0;jenkins-hbase4:35379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@78961b8d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:03,463 DEBUG [RS:1;jenkins-hbase4:38391] ipc.AbstractRpcClient(190): 
Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@11026fb9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:03,463 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:11:03,463 DEBUG [RS:1;jenkins-hbase4:38391] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@319b8676, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:03,463 DEBUG [RS:0;jenkins-hbase4:35379] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7ca99047, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:03,463 DEBUG [RS:2;jenkins-hbase4:41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3d64b285, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:03,463 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,464 DEBUG [RS:2;jenkins-hbase4:41307] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b24b068, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:03,464 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:11:03,465 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:03,465 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:11:03,466 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,466 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:11:03,467 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/table 2023-07-24 18:11:03,468 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:11:03,468 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,469 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740 2023-07-24 18:11:03,469 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740 2023-07-24 18:11:03,471 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-24 18:11:03,475 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35379 2023-07-24 18:11:03,475 INFO [RS:0;jenkins-hbase4:35379] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:03,475 INFO [RS:0;jenkins-hbase4:35379] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:03,475 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1022): About to register with Master. 
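The entries above show the hbase:meta table descriptor being materialized with three column families (info, rep_barrier, table), each IN_MEMORY with BLOOMFILTER NONE and family-specific VERSIONS and BLOCKSIZE values, plus the MultiRowMutationEndpoint coprocessor. A minimal sketch of how an equivalent descriptor could be assembled with the public HBase 2.x client API follows; the class name and the table name "demo" are hypothetical, and only the attributes visible in the log are reproduced, so this is illustrative rather than the code InitMetaProcedure actually runs.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class MetaLikeDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    // 'info' family as logged: no bloom filter, in-memory, 3 versions, 8 KB blocks
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE)
        .setInMemory(true)
        .setMaxVersions(3)
        .setBlocksize(8192)
        .build();
    // 'rep_barrier' family as logged: effectively unlimited versions, 64 KB blocks
    ColumnFamilyDescriptor repBarrier = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("rep_barrier"))
        .setBloomFilterType(BloomType.NONE)
        .setInMemory(true)
        .setMaxVersions(Integer.MAX_VALUE)
        .setBlocksize(65536)
        .build();
    // Attach the same coprocessor class the log reports for hbase:meta
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo"))   // hypothetical table name
        .setColumnFamily(info)
        .setColumnFamily(repBarrier)
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
  }
}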
2023-07-24 18:11:03,475 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38391 2023-07-24 18:11:03,475 INFO [RS:1;jenkins-hbase4:38391] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:03,475 INFO [RS:1;jenkins-hbase4:38391] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:03,475 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:03,475 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41307 2023-07-24 18:11:03,475 INFO [RS:2;jenkins-hbase4:41307] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:03,475 INFO [RS:2;jenkins-hbase4:41307] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:03,475 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:03,476 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36991,1690222262434 with isa=jenkins-hbase4.apache.org/172.31.14.131:35379, startcode=1690222262627 2023-07-24 18:11:03,476 DEBUG [RS:0;jenkins-hbase4:35379] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:03,476 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36991,1690222262434 with isa=jenkins-hbase4.apache.org/172.31.14.131:41307, startcode=1690222262965 2023-07-24 18:11:03,476 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36991,1690222262434 with isa=jenkins-hbase4.apache.org/172.31.14.131:38391, startcode=1690222262805 2023-07-24 18:11:03,476 DEBUG [RS:2;jenkins-hbase4:41307] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:03,476 DEBUG [RS:1;jenkins-hbase4:38391] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:03,478 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36295, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:03,478 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36317, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:03,478 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46065, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:03,480 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36991] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,480 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:03,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-24 18:11:03,481 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36991] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:03,481 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c 2023-07-24 18:11:03,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-24 18:11:03,481 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36991] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,481 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33823 2023-07-24 18:11:03,481 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-24 18:11:03,481 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38029 2023-07-24 18:11:03,482 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-24 18:11:03,482 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c 2023-07-24 18:11:03,482 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c 2023-07-24 18:11:03,482 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33823 2023-07-24 18:11:03,482 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33823 2023-07-24 18:11:03,482 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38029 2023-07-24 18:11:03,482 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38029 2023-07-24 18:11:03,483 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:03,483 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:11:03,489 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:03,489 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11244794560, jitterRate=0.04725310206413269}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:11:03,489 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:11:03,489 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:11:03,489 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:11:03,489 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:11:03,489 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,490 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35379,1690222262627] 2023-07-24 18:11:03,490 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41307,1690222262965] 2023-07-24 18:11:03,490 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38391,1690222262805] 2023-07-24 18:11:03,490 WARN [RS:0;jenkins-hbase4:35379] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:03,489 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:11:03,490 INFO [RS:0;jenkins-hbase4:35379] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:03,490 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,491 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,491 WARN [RS:1;jenkins-hbase4:38391] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-24 18:11:03,491 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,490 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:11:03,491 WARN [RS:2;jenkins-hbase4:41307] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
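Each region server above instantiates AsyncFSWALProvider for its write-ahead log. The provider is selected through configuration; a small sketch follows, assuming the commonly documented keys ("hbase.wal.provider" choosing between "asyncfs" and "filesystem", plus the block-size and maxLogs settings echoed later by AbstractFSWAL). The key names are standard HBase configuration properties, but the concrete values simply mirror this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class WalProviderConfigSketch {
  public static Configuration asyncWalConf() {
    Configuration conf = HBaseConfiguration.create();
    // Select the async fan-out WAL implementation seen in the log
    conf.set("hbase.wal.provider", "asyncfs");
    // blocksize=256 MB as reported; roll size is blocksize * hbase.regionserver.logroll.multiplier
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
    // maxLogs=32 as reported
    conf.setInt("hbase.regionserver.maxlogs", 32);
    return conf;
  }
}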
2023-07-24 18:11:03,491 INFO [RS:1;jenkins-hbase4:38391] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:03,491 INFO [RS:2;jenkins-hbase4:41307] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:03,491 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,491 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,497 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:03,498 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:11:03,500 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-24 18:11:03,500 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-24 18:11:03,502 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-24 18:11:03,504 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-24 18:11:03,505 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,505 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,505 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,505 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-24 18:11:03,506 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,506 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,506 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,506 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,506 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,507 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,507 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:03,507 INFO [RS:1;jenkins-hbase4:38391] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:03,507 DEBUG [RS:0;jenkins-hbase4:35379] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:03,507 DEBUG [RS:2;jenkins-hbase4:41307] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:03,508 INFO [RS:0;jenkins-hbase4:35379] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:03,508 INFO [RS:2;jenkins-hbase4:41307] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:03,508 INFO [RS:1;jenkins-hbase4:38391] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:03,508 INFO [RS:1;jenkins-hbase4:38391] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:03,509 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,509 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:03,510 INFO [RS:0;jenkins-hbase4:35379] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:03,511 INFO [RS:2;jenkins-hbase4:41307] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:03,511 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
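The MemStoreFlusher and PressureAwareCompactionThroughputController entries above report a global memstore limit of roughly 782 MB with a 743 MB low mark and compaction throughput bounds of 100 MB/s and 50 MB/s. A sketch of the configuration knobs behind those numbers, assuming the usual key names; the fractions and byte values are illustrative and would need tuning to a real heap size.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public final class FlushAndCompactionTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Fraction of heap shared by all memstores; the global limit and low mark derive from it
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Compaction throughput bounds matching the 100 MB/s and 50 MB/s values logged
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    return conf;
  }
}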
2023-07-24 18:11:03,511 INFO [RS:0;jenkins-hbase4:35379] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:03,512 INFO [RS:2;jenkins-hbase4:41307] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,512 DEBUG [RS:1;jenkins-hbase4:38391] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,513 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:03,513 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:03,522 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,523 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:03,523 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,523 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,524 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,524 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,524 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,524 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,524 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG 
[RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:0;jenkins-hbase4:35379] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,525 DEBUG [RS:2;jenkins-hbase4:41307] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:03,528 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,528 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,528 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,534 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,535 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,535 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,540 INFO [RS:1;jenkins-hbase4:38391] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:03,540 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38391,1690222262805-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,545 INFO [RS:2;jenkins-hbase4:41307] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:03,545 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41307,1690222262965-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,550 INFO [RS:0;jenkins-hbase4:35379] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:03,550 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35379,1690222262627-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
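The repeated "Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled" entries come from ChoreService scheduling periodic tasks such as CompactionChecker and MemstoreFlusherChore. A minimal sketch of that mechanism, using the ChoreService and ScheduledChore classes named in the log; these are internal (non-client) HBase classes, and the chore name, period, and body here are hypothetical.

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public final class ChoreSketch {
  public static void main(String[] args) {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // Periodic task analogous to the CompactionChecker / MemstoreFlusherChore entries above
    ScheduledChore chore = new ScheduledChore("demoChore", stopper, 1000) {
      @Override protected void chore() {
        System.out.println("chore tick");
      }
    };
    ChoreService service = new ChoreService("demo");
    // Corresponds to the "is enabled" ChoreService entries in this log
    service.scheduleChore(chore);
  }
}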
2023-07-24 18:11:03,555 INFO [RS:2;jenkins-hbase4:41307] regionserver.Replication(203): jenkins-hbase4.apache.org,41307,1690222262965 started 2023-07-24 18:11:03,555 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41307,1690222262965, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41307, sessionid=0x10198877b860003 2023-07-24 18:11:03,555 INFO [RS:1;jenkins-hbase4:38391] regionserver.Replication(203): jenkins-hbase4.apache.org,38391,1690222262805 started 2023-07-24 18:11:03,555 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:03,555 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38391,1690222262805, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38391, sessionid=0x10198877b860002 2023-07-24 18:11:03,555 DEBUG [RS:2;jenkins-hbase4:41307] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,555 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41307,1690222262965' 2023-07-24 18:11:03,555 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:03,555 DEBUG [RS:1;jenkins-hbase4:38391] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,555 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38391,1690222262805' 2023-07-24 18:11:03,555 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:03,555 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:03,556 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:03,556 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:03,556 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:03,556 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:03,556 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:03,556 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:03,556 DEBUG [RS:2;jenkins-hbase4:41307] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:03,556 DEBUG [RS:1;jenkins-hbase4:38391] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,557 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38391,1690222262805' 2023-07-24 
18:11:03,556 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41307,1690222262965' 2023-07-24 18:11:03,557 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:03,557 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:03,557 DEBUG [RS:2;jenkins-hbase4:41307] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:03,557 DEBUG [RS:1;jenkins-hbase4:38391] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:03,557 DEBUG [RS:2;jenkins-hbase4:41307] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:03,557 DEBUG [RS:1;jenkins-hbase4:38391] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:03,557 INFO [RS:1;jenkins-hbase4:38391] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:03,557 INFO [RS:2;jenkins-hbase4:41307] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:03,557 INFO [RS:2;jenkins-hbase4:41307] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:11:03,557 INFO [RS:1;jenkins-hbase4:38391] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:11:03,562 INFO [RS:0;jenkins-hbase4:35379] regionserver.Replication(203): jenkins-hbase4.apache.org,35379,1690222262627 started 2023-07-24 18:11:03,562 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35379,1690222262627, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35379, sessionid=0x10198877b860001 2023-07-24 18:11:03,562 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:03,562 DEBUG [RS:0;jenkins-hbase4:35379] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35379,1690222262627' 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35379,1690222262627' 2023-07-24 18:11:03,563 DEBUG 
[RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:03,563 DEBUG [RS:0;jenkins-hbase4:35379] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:03,564 DEBUG [RS:0;jenkins-hbase4:35379] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:03,564 INFO [RS:0;jenkins-hbase4:35379] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:03,564 INFO [RS:0;jenkins-hbase4:35379] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-24 18:11:03,623 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:03,656 DEBUG [jenkins-hbase4:36991] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:03,657 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38391,1690222262805, state=OPENING 2023-07-24 18:11:03,659 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-24 18:11:03,659 INFO [RS:2;jenkins-hbase4:41307] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41307%2C1690222262965, suffix=, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,41307,1690222262965, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs, maxLogs=32 2023-07-24 18:11:03,659 INFO [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38391%2C1690222262805, suffix=, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,38391,1690222262805, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs, maxLogs=32 2023-07-24 18:11:03,660 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:03,660 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:03,661 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,38391,1690222262805}] 2023-07-24 18:11:03,666 INFO [RS:0;jenkins-hbase4:35379] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35379%2C1690222262627, suffix=, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,35379,1690222262627, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs, maxLogs=32 2023-07-24 18:11:03,685 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:03,696 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:03,697 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:03,697 WARN [ReadOnlyZKClient-127.0.0.1:56931@0x5a647a4e] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-24 18:11:03,697 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:03,697 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:03,697 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:03,698 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:03,704 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:03,704 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34474, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:03,705 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38391] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:34474 deadline: 1690222323704, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38391,1690222262805 
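The CallRunner entry above shows a client Get against hbase:meta rejected with NotServingRegionException because the meta region is still OPENING; the HBase client retries such calls until the region comes online or the retry budget is exhausted. A minimal client-side sketch under that assumption; the retry count and row key are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class MetaGetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Client-side retry budget; NotServingRegionException is retried while meta is opening
    conf.setInt("hbase.client.retries.number", 15);
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
      Result r = meta.get(new Get(Bytes.toBytes("some-row")));   // hypothetical row key
      System.out.println("empty=" + r.isEmpty());
    }
  }
}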
2023-07-24 18:11:03,711 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:03,712 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:03,718 INFO [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,38391,1690222262805/jenkins-hbase4.apache.org%2C38391%2C1690222262805.1690222263663 2023-07-24 18:11:03,718 INFO [RS:0;jenkins-hbase4:35379] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,35379,1690222262627/jenkins-hbase4.apache.org%2C35379%2C1690222262627.1690222263667 2023-07-24 18:11:03,718 DEBUG [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK], DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK], DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK]] 2023-07-24 18:11:03,719 INFO [RS:2;jenkins-hbase4:41307] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,41307,1690222262965/jenkins-hbase4.apache.org%2C41307%2C1690222262965.1690222263662 2023-07-24 18:11:03,722 DEBUG [RS:0;jenkins-hbase4:35379] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK], DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK], DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK]] 2023-07-24 18:11:03,724 DEBUG [RS:2;jenkins-hbase4:41307] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK], DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK], DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK]] 2023-07-24 18:11:03,819 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:03,821 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:03,822 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34486, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:03,826 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-24 18:11:03,826 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:03,827 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, 
prefix=jenkins-hbase4.apache.org%2C38391%2C1690222262805.meta, suffix=.meta, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,38391,1690222262805, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs, maxLogs=32 2023-07-24 18:11:03,845 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:03,845 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:03,846 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:03,848 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,38391,1690222262805/jenkins-hbase4.apache.org%2C38391%2C1690222262805.meta.1690222263828.meta 2023-07-24 18:11:03,850 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK], DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK], DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK]] 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-24 18:11:03,851 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-24 18:11:03,851 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-24 18:11:03,852 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-24 18:11:03,854 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/info 2023-07-24 18:11:03,854 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/info 2023-07-24 18:11:03,854 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-24 18:11:03,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,855 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-24 18:11:03,856 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:03,856 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/rep_barrier 2023-07-24 18:11:03,856 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-24 18:11:03,856 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,857 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-24 18:11:03,857 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/table 2023-07-24 18:11:03,857 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/table 2023-07-24 18:11:03,857 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-24 18:11:03,858 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:03,859 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740 2023-07-24 18:11:03,860 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740 2023-07-24 18:11:03,861 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-24 18:11:03,862 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-24 18:11:03,863 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9888778400, jitterRate=-0.07903574407100677}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-24 18:11:03,863 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-24 18:11:03,864 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690222263819 2023-07-24 18:11:03,868 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-24 18:11:03,869 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-24 18:11:03,869 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38391,1690222262805, state=OPEN 2023-07-24 18:11:03,871 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-24 18:11:03,871 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-24 18:11:03,872 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-24 18:11:03,872 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38391,1690222262805 in 211 msec 2023-07-24 18:11:03,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-24 18:11:03,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 370 msec 2023-07-24 18:11:03,875 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 493 msec 2023-07-24 18:11:03,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690222263875, completionTime=-1 2023-07-24 18:11:03,876 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-24 18:11:03,876 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-24 18:11:03,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-24 18:11:03,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690222323880 2023-07-24 18:11:03,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690222383880 2023-07-24 18:11:03,880 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-24 18:11:03,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36991,1690222262434-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36991,1690222262434-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36991,1690222262434-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:36991, period=300000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-24 18:11:03,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:03,888 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-24 18:11:03,888 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-24 18:11:03,890 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:11:03,890 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:11:03,892 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:03,892 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50 empty. 2023-07-24 18:11:03,892 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:03,892 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-24 18:11:03,904 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:03,905 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8499050dc118b7510fe1f9c83ad81c50, NAME => 'hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp 2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 8499050dc118b7510fe1f9c83ad81c50, disabling compactions & flushes 2023-07-24 18:11:03,913 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 
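Editor's note: the 'hbase:namespace' descriptor logged above ({NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192', ...}) maps directly onto the HBase 2.x client builders. The sketch below shows how an equivalent descriptor could be declared from client code; it is an illustration only, not the code path the master uses when it creates the table here, and the class name NamespaceDescriptorSketch is made up.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceDescriptorSketch {
    // Builds a table descriptor equivalent to the attributes logged for hbase:namespace.
    static TableDescriptor namespaceLike() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "namespace"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
                .setInMemory(true)                   // IN_MEMORY => 'true'
                .setMaxVersions(10)                  // VERSIONS => '10'
                .setBlocksize(8192)                  // BLOCKSIZE => '8192'
                .build())
            .build();
    }
}
```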
2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. after waiting 0 ms 2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:03,913 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:03,913 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 8499050dc118b7510fe1f9c83ad81c50: 2023-07-24 18:11:03,915 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:11:03,916 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222263916"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222263916"}]},"ts":"1690222263916"} 2023-07-24 18:11:03,918 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:11:03,919 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:11:03,919 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222263919"}]},"ts":"1690222263919"} 2023-07-24 18:11:03,920 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-24 18:11:03,922 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:03,923 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:03,923 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:03,923 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:03,923 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:03,923 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8499050dc118b7510fe1f9c83ad81c50, ASSIGN}] 2023-07-24 18:11:03,925 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=8499050dc118b7510fe1f9c83ad81c50, ASSIGN 2023-07-24 18:11:03,925 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=8499050dc118b7510fe1f9c83ad81c50, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38391,1690222262805; forceNewPlan=false, retain=false 2023-07-24 18:11:04,006 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:04,008 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-24 18:11:04,010 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:11:04,011 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:11:04,012 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,013 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14 empty. 
2023-07-24 18:11:04,013 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,013 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-24 18:11:04,023 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:04,024 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a19b2f2bc597559d6e5be813a2e02e14, NAME => 'hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp 2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a19b2f2bc597559d6e5be813a2e02e14, disabling compactions & flushes 2023-07-24 18:11:04,035 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. after waiting 0 ms 2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,035 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 
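Editor's note: the 'hbase:rsgroup' descriptor logged above differs from the namespace table in two attributes, the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy split policy, plus a single 'm' column family. The sketch below is a hedged illustration of declaring the same attributes with the 2.x builders; RsGroupDescriptorSketch is an invented name and this is not the internal code the RSGroupStartupWorker runs.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupDescriptorSketch {
    // Mirrors the logged hbase:rsgroup attributes: one 'm' family, the
    // MultiRowMutationEndpoint coprocessor, and a split policy that disables splits.
    static TableDescriptor rsGroupLike() throws IOException {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("hbase", "rsgroup"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
    }
}
```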
2023-07-24 18:11:04,035 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a19b2f2bc597559d6e5be813a2e02e14: 2023-07-24 18:11:04,037 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:11:04,038 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222264038"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222264038"}]},"ts":"1690222264038"} 2023-07-24 18:11:04,040 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-24 18:11:04,041 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:11:04,041 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222264041"}]},"ts":"1690222264041"} 2023-07-24 18:11:04,044 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-24 18:11:04,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:04,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:04,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:04,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:04,052 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:04,052 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a19b2f2bc597559d6e5be813a2e02e14, ASSIGN}] 2023-07-24 18:11:04,053 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a19b2f2bc597559d6e5be813a2e02e14, ASSIGN 2023-07-24 18:11:04,054 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a19b2f2bc597559d6e5be813a2e02e14, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38391,1690222262805; forceNewPlan=false, retain=false 2023-07-24 18:11:04,054 INFO [jenkins-hbase4:36991] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-24 18:11:04,056 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8499050dc118b7510fe1f9c83ad81c50, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,056 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222264056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222264056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222264056"}]},"ts":"1690222264056"} 2023-07-24 18:11:04,056 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a19b2f2bc597559d6e5be813a2e02e14, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,057 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222264056"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222264056"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222264056"}]},"ts":"1690222264056"} 2023-07-24 18:11:04,058 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 8499050dc118b7510fe1f9c83ad81c50, server=jenkins-hbase4.apache.org,38391,1690222262805}] 2023-07-24 18:11:04,058 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure a19b2f2bc597559d6e5be813a2e02e14, server=jenkins-hbase4.apache.org,38391,1690222262805}] 2023-07-24 18:11:04,213 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a19b2f2bc597559d6e5be813a2e02e14, NAME => 'hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. service=MultiRowMutationService 2023-07-24 18:11:04,213 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,213 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,215 INFO [StoreOpener-a19b2f2bc597559d6e5be813a2e02e14-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,216 DEBUG [StoreOpener-a19b2f2bc597559d6e5be813a2e02e14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/m 2023-07-24 18:11:04,216 DEBUG [StoreOpener-a19b2f2bc597559d6e5be813a2e02e14-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/m 2023-07-24 18:11:04,216 INFO [StoreOpener-a19b2f2bc597559d6e5be813a2e02e14-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a19b2f2bc597559d6e5be813a2e02e14 columnFamilyName m 2023-07-24 18:11:04,217 INFO [StoreOpener-a19b2f2bc597559d6e5be813a2e02e14-1] regionserver.HStore(310): Store=a19b2f2bc597559d6e5be813a2e02e14/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:04,217 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,218 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:04,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:04,222 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a19b2f2bc597559d6e5be813a2e02e14; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@129eebe1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:04,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a19b2f2bc597559d6e5be813a2e02e14: 2023-07-24 18:11:04,223 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14., pid=9, masterSystemTime=1690222264209 2023-07-24 18:11:04,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,225 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:04,225 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:04,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8499050dc118b7510fe1f9c83ad81c50, NAME => 'hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:04,225 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a19b2f2bc597559d6e5be813a2e02e14, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,226 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690222264225"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222264225"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222264225"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222264225"}]},"ts":"1690222264225"} 2023-07-24 18:11:04,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:04,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,227 INFO [StoreOpener-8499050dc118b7510fe1f9c83ad81c50-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,228 DEBUG [StoreOpener-8499050dc118b7510fe1f9c83ad81c50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/info 2023-07-24 18:11:04,228 DEBUG [StoreOpener-8499050dc118b7510fe1f9c83ad81c50-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/info 2023-07-24 18:11:04,229 INFO [StoreOpener-8499050dc118b7510fe1f9c83ad81c50-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8499050dc118b7510fe1f9c83ad81c50 columnFamilyName info 2023-07-24 18:11:04,229 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-24 18:11:04,229 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure a19b2f2bc597559d6e5be813a2e02e14, server=jenkins-hbase4.apache.org,38391,1690222262805 in 169 msec 2023-07-24 18:11:04,229 INFO [StoreOpener-8499050dc118b7510fe1f9c83ad81c50-1] regionserver.HStore(310): Store=8499050dc118b7510fe1f9c83ad81c50/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:04,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,230 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,231 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-24 18:11:04,231 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a19b2f2bc597559d6e5be813a2e02e14, ASSIGN in 177 msec 2023-07-24 18:11:04,231 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; 
CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:11:04,232 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222264231"}]},"ts":"1690222264231"} 2023-07-24 18:11:04,233 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-24 18:11:04,233 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:04,235 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:11:04,235 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:04,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8499050dc118b7510fe1f9c83ad81c50; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10442608160, jitterRate=-0.02745632827281952}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:04,236 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8499050dc118b7510fe1f9c83ad81c50: 2023-07-24 18:11:04,236 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 228 msec 2023-07-24 18:11:04,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50., pid=8, masterSystemTime=1690222264209 2023-07-24 18:11:04,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:04,238 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 
2023-07-24 18:11:04,238 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=8499050dc118b7510fe1f9c83ad81c50, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,238 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690222264238"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222264238"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222264238"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222264238"}]},"ts":"1690222264238"} 2023-07-24 18:11:04,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-24 18:11:04,240 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 8499050dc118b7510fe1f9c83ad81c50, server=jenkins-hbase4.apache.org,38391,1690222262805 in 181 msec 2023-07-24 18:11:04,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-24 18:11:04,242 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=8499050dc118b7510fe1f9c83ad81c50, ASSIGN in 317 msec 2023-07-24 18:11:04,242 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:11:04,243 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222264242"}]},"ts":"1690222264242"} 2023-07-24 18:11:04,243 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-24 18:11:04,245 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:11:04,246 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 358 msec 2023-07-24 18:11:04,289 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-24 18:11:04,291 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:11:04,292 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:04,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-24 18:11:04,303 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): 
master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:11:04,305 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-24 18:11:04,307 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-24 18:11:04,311 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-24 18:11:04,311 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-24 18:11:04,315 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:11:04,317 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:04,317 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,318 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 10 msec 2023-07-24 18:11:04,318 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:11:04,320 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-24 18:11:04,333 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-24 18:11:04,335 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-24 18:11:04,335 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.189sec 2023-07-24 18:11:04,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-24 18:11:04,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
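Editor's note: the two CreateNamespaceProcedure entries above ('default' and 'hbase') are executed internally by the master during initialization. For comparison, the sketch below shows how a client could create its own namespace through the Admin API; the namespace name 'demo_ns' and class name NamespaceSketch are invented for illustration.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
    public static void main(String[] args) throws Exception {
        // Client-side sketch only: in the log above the master creates the
        // 'default' and 'hbase' namespaces itself at startup.
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        }
    }
}
```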
2023-07-24 18:11:04,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-24 18:11:04,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36991,1690222262434-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-24 18:11:04,336 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36991,1690222262434-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-24 18:11:04,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-24 18:11:04,431 DEBUG [Listener at localhost/45633] zookeeper.ReadOnlyZKClient(139): Connect 0x4c50ab29 to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:04,437 DEBUG [Listener at localhost/45633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@87be5a3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:04,439 DEBUG [hconnection-0x192c0497-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:04,440 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34500, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:04,442 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:04,442 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:04,444 DEBUG [Listener at localhost/45633] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-24 18:11:04,445 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49642, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-24 18:11:04,448 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-24 18:11:04,448 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:04,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-24 18:11:04,449 DEBUG [Listener at localhost/45633] zookeeper.ReadOnlyZKClient(139): Connect 0x0e8df158 to 127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:04,453 DEBUG [Listener at localhost/45633] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77fc54a2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=null 2023-07-24 18:11:04,454 INFO [Listener at localhost/45633] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:04,457 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:04,459 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x10198877b86000a connected 2023-07-24 18:11:04,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,463 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-24 18:11:04,476 INFO [Listener at localhost/45633] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-24 18:11:04,476 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:04,477 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:04,477 INFO [Listener at localhost/45633] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-24 18:11:04,477 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-24 18:11:04,477 INFO [Listener at localhost/45633] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-24 18:11:04,477 INFO [Listener at localhost/45633] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-24 18:11:04,478 INFO [Listener at localhost/45633] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42795 2023-07-24 18:11:04,478 INFO [Listener at localhost/45633] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-24 18:11:04,479 DEBUG [Listener at localhost/45633] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-24 18:11:04,480 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:04,481 INFO [Listener at localhost/45633] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-24 18:11:04,482 INFO [Listener at localhost/45633] 
zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42795 connecting to ZooKeeper ensemble=127.0.0.1:56931 2023-07-24 18:11:04,485 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:427950x0, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-24 18:11:04,487 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(162): regionserver:427950x0, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-24 18:11:04,487 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42795-0x10198877b86000b connected 2023-07-24 18:11:04,488 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-24 18:11:04,489 DEBUG [Listener at localhost/45633] zookeeper.ZKUtil(164): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-24 18:11:04,489 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42795 2023-07-24 18:11:04,489 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42795 2023-07-24 18:11:04,489 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42795 2023-07-24 18:11:04,490 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42795 2023-07-24 18:11:04,490 DEBUG [Listener at localhost/45633] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42795 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-24 18:11:04,492 INFO [Listener at localhost/45633] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-24 18:11:04,493 INFO [Listener at localhost/45633] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-24 18:11:04,493 INFO [Listener at localhost/45633] http.HttpServer(1146): Jetty bound to port 40495 2023-07-24 18:11:04,493 INFO [Listener at localhost/45633] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-24 18:11:04,494 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:04,495 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4a3ed2e7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,AVAILABLE} 2023-07-24 18:11:04,495 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:04,495 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@28beacb7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-24 18:11:04,608 INFO [Listener at localhost/45633] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-24 18:11:04,609 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-24 18:11:04,609 INFO [Listener at localhost/45633] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-24 18:11:04,609 INFO [Listener at localhost/45633] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-24 18:11:04,610 INFO [Listener at localhost/45633] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-24 18:11:04,611 INFO [Listener at localhost/45633] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6d18f4b1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/java.io.tmpdir/jetty-0_0_0_0-40495-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5548929529131328774/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:04,612 INFO [Listener at localhost/45633] server.AbstractConnector(333): Started ServerConnector@78269785{HTTP/1.1, (http/1.1)}{0.0.0.0:40495} 2023-07-24 18:11:04,613 INFO [Listener at localhost/45633] server.Server(415): Started @44671ms 2023-07-24 18:11:04,615 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(951): ClusterId : 347b5a29-c4df-44ed-8790-7225812d4747 2023-07-24 18:11:04,615 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-24 18:11:04,617 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-24 18:11:04,617 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-24 18:11:04,619 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-24 18:11:04,622 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ReadOnlyZKClient(139): Connect 0x6a4b14d6 to 
127.0.0.1:56931 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-24 18:11:04,627 DEBUG [RS:3;jenkins-hbase4:42795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@648cf7ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-24 18:11:04,627 DEBUG [RS:3;jenkins-hbase4:42795] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@60b37fa6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:04,635 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:42795 2023-07-24 18:11:04,635 INFO [RS:3;jenkins-hbase4:42795] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-24 18:11:04,635 INFO [RS:3;jenkins-hbase4:42795] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-24 18:11:04,635 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1022): About to register with Master. 2023-07-24 18:11:04,636 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,36991,1690222262434 with isa=jenkins-hbase4.apache.org/172.31.14.131:42795, startcode=1690222264476 2023-07-24 18:11:04,636 DEBUG [RS:3;jenkins-hbase4:42795] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-24 18:11:04,638 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59367, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-24 18:11:04,639 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36991] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,639 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-24 18:11:04,639 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c 2023-07-24 18:11:04,639 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33823 2023-07-24 18:11:04,639 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=38029 2023-07-24 18:11:04,644 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:04,644 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:04,644 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,644 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:04,644 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:04,644 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,644 WARN [RS:3;jenkins-hbase4:42795] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-24 18:11:04,644 INFO [RS:3;jenkins-hbase4:42795] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-24 18:11:04,644 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,644 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-24 18:11:04,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:04,645 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42795,1690222264476] 2023-07-24 18:11:04,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:04,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:04,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:04,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:04,646 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,647 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,649 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:04,649 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:04,649 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,650 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ZKUtil(162): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:04,650 DEBUG [RS:3;jenkins-hbase4:42795] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-24 18:11:04,650 INFO [RS:3;jenkins-hbase4:42795] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-24 18:11:04,652 INFO [RS:3;jenkins-hbase4:42795] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-24 18:11:04,652 INFO [RS:3;jenkins-hbase4:42795] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-24 18:11:04,652 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:04,652 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-24 18:11:04,653 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:04,653 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,654 DEBUG [RS:3;jenkins-hbase4:42795] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-24 18:11:04,655 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:04,655 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:04,655 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-24 18:11:04,666 INFO [RS:3;jenkins-hbase4:42795] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-24 18:11:04,666 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42795,1690222264476-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-24 18:11:04,677 INFO [RS:3;jenkins-hbase4:42795] regionserver.Replication(203): jenkins-hbase4.apache.org,42795,1690222264476 started 2023-07-24 18:11:04,678 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42795,1690222264476, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42795, sessionid=0x10198877b86000b 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42795,1690222264476' 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-24 18:11:04,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:04,678 DEBUG [RS:3;jenkins-hbase4:42795] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:04,679 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42795,1690222264476' 2023-07-24 18:11:04,679 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-24 18:11:04,679 DEBUG [RS:3;jenkins-hbase4:42795] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-24 18:11:04,679 DEBUG [RS:3;jenkins-hbase4:42795] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-24 18:11:04,679 INFO [RS:3;jenkins-hbase4:42795] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-24 18:11:04,679 INFO [RS:3;jenkins-hbase4:42795] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-24 18:11:04,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:04,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:04,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:04,686 DEBUG [hconnection-0x28ac0a0-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-24 18:11:04,688 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34506, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-24 18:11:04,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:04,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:04,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:49642 deadline: 1690223464695, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
2023-07-24 18:11:04,696 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:11:04,697 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:04,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,698 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:04,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:04,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:04,747 INFO [Listener at localhost/45633] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=557 (was 512) Potentially hanging thread: IPC Server handler 1 on default port 33823 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1269543214-2588 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:33823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@6d0802d6 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@5586f30a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@22e8892c java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-138d4621-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@70e44792 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:42795Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 45633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@282f5bd4 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35379 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) 
org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@703c0b8f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1036353412_17 at /127.0.0.1:50962 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:50964 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp772451915-2281 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1776488039-2254 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data4/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40097,1690222256671 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:50954 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45633 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1776488039-2253 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,36991,1690222262434 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a04b9d9-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1776488039-2256 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35379-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6c7c4d2a[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1596745683_17 at /127.0.0.1:51750 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117505966-2220 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Session-HouseKeeper-26105268-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117505966-2224 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:40065 from jenkins java.lang.Object.wait(Native Method) 
org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:41307Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x4c50ab29-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging 
thread: IPC Client (2124007137) connection to localhost/127.0.0.1:40065 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/42673-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1623744952-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@3a2622e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33823 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1623744952-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117505966-2225 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1596745683_17 at /127.0.0.1:59340 [Receiving block 
BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1776488039-2252-acceptor-0@2ae488ef-ServerConnector@5f7a00b2{HTTP/1.1, (http/1.1)}{0.0.0.0:42497} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33823 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/45633.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: 543024312@qtp-238111672-1 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:35379Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data5/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:40065 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1623744952-2312-acceptor-0@2ff7d9ec-ServerConnector@444ab1ff{HTTP/1.1, (http/1.1)}{0.0.0.0:35101} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@dcc1e1c[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:38391Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp772451915-2285 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-822041179_17 at /127.0.0.1:50944 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1269543214-2589 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c-prefix:jenkins-hbase4.apache.org,35379,1690222262627 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 0 on default port 33497 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1036353412_17 at /127.0.0.1:59384 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1623744952-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp240054577-2326 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp240054577-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:40065 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging 
thread: RS:1;jenkins-hbase4:38391-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222263406 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Session-HouseKeeper-34d64535-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x192c0497-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33497 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1776488039-2258 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x5a647a4e-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a4b14d6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1623744952-2313 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:59400 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x28ac0a0-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: 1261270605@qtp-87974605-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x4c50ab29-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1776488039-2251 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x0e8df158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp240054577-2327-acceptor-0@ad6d516-ServerConnector@891e88{HTTP/1.1, (http/1.1)}{0.0.0.0:40979} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x5a647a4e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData-prefix:jenkins-hbase4.apache.org,36991,1690222262434 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 36565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a4b14d6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: 1605194283@qtp-863611409-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(1795467913) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a04b9d9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1269543214-2587-acceptor-0@3ce70a57-ServerConnector@78269785{HTTP/1.1, (http/1.1)}{0.0.0.0:40495} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33823 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x00d12096-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 36565 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp117505966-2223 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x5a647a4e-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 324218602@qtp-1526616028-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 45633 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp117505966-2222 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data1/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c-prefix:jenkins-hbase4.apache.org,38391,1690222262805 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@7bd9da01 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x0e8df158-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp117505966-2221-acceptor-0@310a4b55-ServerConnector@399a6c{HTTP/1.1, (http/1.1)}{0.0.0.0:38029} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server handler 1 on default port 33497 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1776488039-2257 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data3/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp772451915-2282-acceptor-0@347b66c6-ServerConnector@2d998fb6{HTTP/1.1, (http/1.1)}{0.0.0.0:33241} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@78615954 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:51774 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp117505966-2226 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 45633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: M:0;jenkins-hbase4:36991 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp117505966-2227 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-822041179_17 at /127.0.0.1:59366 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1623744952-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40065 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:40065 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1623744952-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33497 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 0 on default port 33823 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:42795-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57771@0x3c9a544d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41307 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33497 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp240054577-2325 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41307 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x7651fb37-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1269543214-2592 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 33497 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server idle connection scanner for port 33823 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) 
org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS:3;jenkins-hbase4:42795 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3148d48e sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1036353412_17 at /127.0.0.1:51772 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:59414 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) 
java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 36565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp240054577-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c-prefix:jenkins-hbase4.apache.org,38391,1690222262805.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 1 on default port 45633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1596745683_17 at /127.0.0.1:51720 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57771@0x3c9a544d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp772451915-2283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver 
for client DFSClient_NONMAPREDUCE_1596745683_17 at /127.0.0.1:50914 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp240054577-2323 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a04b9d9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1269543214-2590 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data6/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp772451915-2284 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 962611126@qtp-87974605-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42373 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x7651fb37 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c-prefix:jenkins-hbase4.apache.org,41307,1690222262965 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:56931 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x4c50ab29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x00d12096-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 398075203@qtp-863611409-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46285 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-6093abee-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222263406 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: Listener at localhost/45633.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: qtp240054577-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-822041179_17 at /127.0.0.1:51770 [Receiving block 
BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:33823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 36565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1269543214-2586 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:33823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1269543214-2593 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2b70893a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data2/current/BP-1022496820-172.31.14.131-1690222261555 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x7651fb37-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp240054577-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp772451915-2288 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/42673-SendThread(127.0.0.1:57771) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x6a4b14d6-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41307-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1623744952-2311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1583719811.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 36565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38391 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ProcessThread(sid:0 cport:56931): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:59372 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33823 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp772451915-2286 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 196450505@qtp-1526616028-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34981 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server handler 3 on default port 45633 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:40065 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:38391 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1431084588_17 at /127.0.0.1:51766 [Receiving block BP-1022496820-172.31.14.131-1690222261555:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:40065 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:33823 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x00d12096 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/2063425092.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 36565 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42795 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1776488039-2255 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:57771@0x3c9a544d-SendThread(127.0.0.1:57771) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@afceaf0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1269543214-2591 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: hconnection-0x28ac0a0-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56931@0x0e8df158-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp772451915-2287 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x3af88b82-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1081679790@qtp-238111672-0 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45727 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36991 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1022496820-172.31.14.131-1690222261555:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35379 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-7ba43026-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@677eadb8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45633-SendThread(127.0.0.1:56931) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@8c04e09 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (2124007137) connection to localhost/127.0.0.1:40065 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=841 (was 803) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=587 (was 576) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=5229 (was 5401) 2023-07-24 18:11:04,750 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-24 18:11:04,768 INFO [Listener at localhost/45633] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=557, OpenFileDescriptor=841, MaxFileDescriptor=60000, SystemLoadAverage=587, ProcessCount=177, AvailableMemoryMB=5229 2023-07-24 18:11:04,768 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-24 18:11:04,768 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-24 18:11:04,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:04,773 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:04,773 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:04,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:04,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:04,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:04,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:04,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:04,781 INFO [RS:3;jenkins-hbase4:42795] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42795%2C1690222264476, suffix=, logDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,42795,1690222264476, archiveDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs, maxLogs=32 2023-07-24 18:11:04,781 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:04,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:04,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,785 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:04,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:04,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:04,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:04,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:04,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:49642 deadline: 1690223464793, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:04,795 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:11:04,797 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:04,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:04,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:04,800 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:04,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:04,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:04,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:04,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 18:11:04,809 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK] 2023-07-24 18:11:04,809 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK] 2023-07-24 18:11:04,810 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK] 2023-07-24 18:11:04,810 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:11:04,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-24 18:11:04,815 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:11:04,819 INFO [RS:3;jenkins-hbase4:42795] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/WALs/jenkins-hbase4.apache.org,42795,1690222264476/jenkins-hbase4.apache.org%2C42795%2C1690222264476.1690222264781 2023-07-24 18:11:04,820 DEBUG [RS:3;jenkins-hbase4:42795] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46613,DS-dedd1d5f-2249-49d5-974b-4438c709f00b,DISK], DatanodeInfoWithStorage[127.0.0.1:37471,DS-03e2bd2b-6634-4f90-bcc5-c7d25437e2d1,DISK], DatanodeInfoWithStorage[127.0.0.1:42231,DS-e2933bd8-7d89-4050-a490-7eaec03ac5ae,DISK]] 2023-07-24 18:11:04,820 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:04,820 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:04,821 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:04,823 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-24 18:11:04,824 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:04,825 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 empty. 
2023-07-24 18:11:04,825 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:04,826 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 18:11:04,838 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-24 18:11:04,839 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => d27b85ef9846fbe7ced1abc8913781c4, NAME => 't1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing d27b85ef9846fbe7ced1abc8913781c4, disabling compactions & flushes 2023-07-24 18:11:04,850 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. after waiting 0 ms 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:04,850 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:04,850 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for d27b85ef9846fbe7ced1abc8913781c4: 2023-07-24 18:11:04,853 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-24 18:11:04,854 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222264854"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222264854"}]},"ts":"1690222264854"} 2023-07-24 18:11:04,855 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-24 18:11:04,856 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-24 18:11:04,856 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222264856"}]},"ts":"1690222264856"} 2023-07-24 18:11:04,857 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-24 18:11:04,861 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-24 18:11:04,861 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, ASSIGN}] 2023-07-24 18:11:04,862 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, ASSIGN 2023-07-24 18:11:04,863 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41307,1690222262965; forceNewPlan=false, retain=false 2023-07-24 18:11:04,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:11:05,013 INFO [jenkins-hbase4:36991] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-24 18:11:05,015 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d27b85ef9846fbe7ced1abc8913781c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:05,015 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222265015"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222265015"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222265015"}]},"ts":"1690222265015"} 2023-07-24 18:11:05,017 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure d27b85ef9846fbe7ced1abc8913781c4, server=jenkins-hbase4.apache.org,41307,1690222262965}] 2023-07-24 18:11:05,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:11:05,170 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:05,170 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-24 18:11:05,172 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:36354, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-24 18:11:05,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:05,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d27b85ef9846fbe7ced1abc8913781c4, NAME => 't1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.', STARTKEY => '', ENDKEY => ''} 2023-07-24 18:11:05,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-24 18:11:05,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,178 INFO [StoreOpener-d27b85ef9846fbe7ced1abc8913781c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,179 DEBUG [StoreOpener-d27b85ef9846fbe7ced1abc8913781c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/cf1 2023-07-24 18:11:05,179 DEBUG [StoreOpener-d27b85ef9846fbe7ced1abc8913781c4-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/cf1 2023-07-24 18:11:05,179 INFO [StoreOpener-d27b85ef9846fbe7ced1abc8913781c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d27b85ef9846fbe7ced1abc8913781c4 columnFamilyName cf1 2023-07-24 18:11:05,180 INFO [StoreOpener-d27b85ef9846fbe7ced1abc8913781c4-1] regionserver.HStore(310): Store=d27b85ef9846fbe7ced1abc8913781c4/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-24 18:11:05,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-24 18:11:05,185 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d27b85ef9846fbe7ced1abc8913781c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9549442560, jitterRate=-0.11063885688781738}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-24 18:11:05,185 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d27b85ef9846fbe7ced1abc8913781c4: 2023-07-24 18:11:05,186 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4., pid=14, masterSystemTime=1690222265170 2023-07-24 18:11:05,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:05,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 
2023-07-24 18:11:05,191 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=d27b85ef9846fbe7ced1abc8913781c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:05,191 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222265191"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690222265191"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690222265191"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690222265191"}]},"ts":"1690222265191"} 2023-07-24 18:11:05,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-24 18:11:05,194 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure d27b85ef9846fbe7ced1abc8913781c4, server=jenkins-hbase4.apache.org,41307,1690222262965 in 176 msec 2023-07-24 18:11:05,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-24 18:11:05,196 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, ASSIGN in 333 msec 2023-07-24 18:11:05,196 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-24 18:11:05,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222265196"}]},"ts":"1690222265196"} 2023-07-24 18:11:05,197 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-24 18:11:05,200 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-24 18:11:05,201 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 398 msec 2023-07-24 18:11:05,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-24 18:11:05,425 INFO [Listener at localhost/45633] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-24 18:11:05,425 DEBUG [Listener at localhost/45633] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-24 18:11:05,425 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:05,427 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-24 18:11:05,427 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:05,427 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
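The create request logged above (Client=jenkins//172.31.14.131 create 't1' with a single column family 'cf1' and REGION_REPLICATION => '1') corresponds to an ordinary Admin.createTable call. A minimal client-side sketch of that call follows; it assumes a reachable cluster whose configuration is on the classpath, and it is not the test's actual code.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // reads hbase-site.xml from the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("t1"))
          .setRegionReplication(1)                                   // REGION_REPLICATION => '1'
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))  // NAME => 'cf1', library defaults otherwise
          .build());   // on the master this is executed as a CreateTableProcedure (pid=12 above)
    }
  }
}

The call blocks until the procedure finishes, which matches the "Operation: CREATE, Table Name: default:t1, procId: 12 completed" entry above.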
2023-07-24 18:11:05,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-24 18:11:05,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-24 18:11:05,432 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-24 18:11:05,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-24 18:11:05,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:49642 deadline: 1690222325429, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-24 18:11:05,434 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:05,435 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-24 18:11:05,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:05,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:05,536 INFO [Listener at localhost/45633] client.HBaseAdmin$15(890): Started disable of t1 2023-07-24 18:11:05,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-24 18:11:05,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-24 18:11:05,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 18:11:05,542 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222265541"}]},"ts":"1690222265541"} 2023-07-24 18:11:05,543 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-24 18:11:05,545 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-24 18:11:05,546 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, UNASSIGN}] 2023-07-24 18:11:05,546 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, UNASSIGN 2023-07-24 18:11:05,547 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d27b85ef9846fbe7ced1abc8913781c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:05,547 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222265547"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690222265547"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690222265547"}]},"ts":"1690222265547"} 2023-07-24 18:11:05,549 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure d27b85ef9846fbe7ced1abc8913781c4, server=jenkins-hbase4.apache.org,41307,1690222262965}] 2023-07-24 18:11:05,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 18:11:05,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d27b85ef9846fbe7ced1abc8913781c4, disabling compactions & flushes 2023-07-24 18:11:05,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:05,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:05,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. after waiting 0 ms 2023-07-24 18:11:05,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 
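The second create of 't1' recorded above is rejected by the master with TableExistsException and its procedure (pid=15) is rolled back. From the client side the same rejection surfaces as a thrown exception; the following sketch is a hypothetical helper showing the shape of that interaction, not the test's code.

import java.io.IOException;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class DuplicateCreateSketch {
  /** Returns true when a second create of 't1' is refused because the table already exists. */
  static boolean duplicateCreateRejected(Admin admin) throws IOException {
    try {
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("t1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf1"))
          .build());
      return false;                    // unexpectedly succeeded
    } catch (TableExistsException e) {
      return true;                     // master rolled back the CreateTableProcedure, as logged above
    }
  }
}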
2023-07-24 18:11:05,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-24 18:11:05,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4. 2023-07-24 18:11:05,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d27b85ef9846fbe7ced1abc8913781c4: 2023-07-24 18:11:05,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,707 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=d27b85ef9846fbe7ced1abc8913781c4, regionState=CLOSED 2023-07-24 18:11:05,708 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690222265707"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690222265707"}]},"ts":"1690222265707"} 2023-07-24 18:11:05,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-24 18:11:05,710 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure d27b85ef9846fbe7ced1abc8913781c4, server=jenkins-hbase4.apache.org,41307,1690222262965 in 160 msec 2023-07-24 18:11:05,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-24 18:11:05,711 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=d27b85ef9846fbe7ced1abc8913781c4, UNASSIGN in 164 msec 2023-07-24 18:11:05,718 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690222265717"}]},"ts":"1690222265717"} 2023-07-24 18:11:05,719 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-24 18:11:05,721 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-24 18:11:05,724 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 187 msec 2023-07-24 18:11:05,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-24 18:11:05,842 INFO [Listener at localhost/45633] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-24 18:11:05,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-24 18:11:05,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-24 18:11:05,846 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 18:11:05,846 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-24 18:11:05,847 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-24 18:11:05,848 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:05,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:05,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:05,851 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 18:11:05,853 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/cf1, FileablePath, hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/recovered.edits] 2023-07-24 18:11:05,859 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/recovered.edits/4.seqid to hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/archive/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4/recovered.edits/4.seqid 2023-07-24 18:11:05,860 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/.tmp/data/default/t1/d27b85ef9846fbe7ced1abc8913781c4 2023-07-24 18:11:05,860 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-24 18:11:05,862 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-24 18:11:05,863 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-24 18:11:05,865 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-24 18:11:05,866 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-24 18:11:05,866 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
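The DisableTableProcedure (pid=16) and DeleteTableProcedure (pid=19) traced above are driven by two Admin calls from the client. A minimal sketch of that client-side sequence, assuming an already-open Admin handle:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

final class DropT1Sketch {
  static void dropT1(Admin admin) throws IOException {
    TableName t1 = TableName.valueOf("t1");
    if (admin.tableExists(t1)) {
      if (!admin.isTableDisabled(t1)) {
        admin.disableTable(t1);   // unassigns the region and marks t1 DISABLED in hbase:meta
      }
      admin.deleteTable(t1);      // archives the region directory and removes t1's rows from hbase:meta
    }
  }
}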
2023-07-24 18:11:05,866 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690222265866"}]},"ts":"9223372036854775807"} 2023-07-24 18:11:05,867 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-24 18:11:05,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => d27b85ef9846fbe7ced1abc8913781c4, NAME => 't1,,1690222264802.d27b85ef9846fbe7ced1abc8913781c4.', STARTKEY => '', ENDKEY => ''}] 2023-07-24 18:11:05,867 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-24 18:11:05,867 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690222265867"}]},"ts":"9223372036854775807"} 2023-07-24 18:11:05,868 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-24 18:11:05,871 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-24 18:11:05,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 28 msec 2023-07-24 18:11:05,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-24 18:11:05,953 INFO [Listener at localhost/45633] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-24 18:11:05,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:05,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:05,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:05,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
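The teardown that follows attempts to move the master's address (jenkins-hbase4.apache.org:36991) into the 'master' rsgroup, and RSGroupAdminServer rejects it with a ConstraintException because that address is not a known region server. A client-side sketch of the rejected call is below; the host:port and the moveServers entry point are taken from the stack traces in this log, while the way the RSGroupAdminClient is constructed is an assumption about how such a client is obtained. Illustrative only, not the test's code.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

final class MoveMasterSketch {
  static void tryMoveMasterIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient groups = new RSGroupAdminClient(conn);
    Address master = Address.fromParts("jenkins-hbase4.apache.org", 36991);
    try {
      groups.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException e) {
      // "Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist."
      // -- the same message that appears in the entries and stack traces below.
    }
  }
}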
2023-07-24 18:11:05,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:05,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:05,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:05,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:05,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:05,962 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:05,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:05,969 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:05,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:05,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:05,971 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:05,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:05,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:05,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:05,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:05,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:05,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:05,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:49642 deadline: 1690223465979, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:05,980 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:05,983 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:05,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:05,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:05,984 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:05,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:05,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,005 INFO [Listener at localhost/45633] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 557) - Thread LEAK? -, OpenFileDescriptor=847 (was 841) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=587 (was 587), ProcessCount=177 (was 177), AvailableMemoryMB=5145 (was 5229) 2023-07-24 18:11:06,005 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-24 18:11:06,025 INFO [Listener at localhost/45633] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=587, ProcessCount=177, AvailableMemoryMB=5146 2023-07-24 18:11:06,025 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-24 18:11:06,025 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-24 18:11:06,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:11:06,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,044 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,047 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,048 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466056, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,057 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:11:06,059 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,060 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 18:11:06,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:11:06,062 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-24 18:11:06,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-24 18:11:06,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-24 18:11:06,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:06,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,082 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466091, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,092 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:06,093 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,094 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,115 INFO [Listener at localhost/45633] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? -, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=587 (was 587), ProcessCount=177 (was 177), AvailableMemoryMB=5148 (was 5146) - AvailableMemoryMB LEAK? 
- 2023-07-24 18:11:06,115 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 18:11:06,135 INFO [Listener at localhost/45633] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=587, ProcessCount=177, AvailableMemoryMB=5147 2023-07-24 18:11:06,135 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-24 18:11:06,135 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-24 18:11:06,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:11:06,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,148 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,156 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,158 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:11:06,158 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-24 18:11:06,159 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:06,159 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-24 18:11:06,159 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:11:06,159 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-24 18:11:06,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466161, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,162 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:06,164 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,165 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:06,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,183 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,184 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466193, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,194 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:06,196 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,197 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,220 INFO [Listener at localhost/45633] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=587 (was 587), ProcessCount=177 (was 177), AvailableMemoryMB=5147 (was 5147) 2023-07-24 18:11:06,220 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-24 18:11:06,240 INFO [Listener at localhost/45633] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=847, MaxFileDescriptor=60000, SystemLoadAverage=587, ProcessCount=177, AvailableMemoryMB=5146 2023-07-24 18:11:06,240 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-24 18:11:06,240 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-24 18:11:06,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-24 18:11:06,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,256 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,259 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466267, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,268 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-24 18:11:06,269 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,270 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,271 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-24 18:11:06,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-24 18:11:06,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 18:11:06,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-24 18:11:06,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,278 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 18:11:06,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,284 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:11:06,289 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:11:06,294 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-24 18:11:06,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-24 18:11:06,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 18:11:06,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:49642 deadline: 1690223466385, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-24 18:11:06,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-24 18:11:06,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 18:11:06,409 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 18:11:06,410 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 17 msec 2023-07-24 18:11:06,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-24 18:11:06,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-24 18:11:06,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 18:11:06,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-24 18:11:06,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-24 18:11:06,516 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-24 18:11:06,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,523 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,525 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 18:11:06,526 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,528 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-24 18:11:06,528 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-24 18:11:06,528 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,529 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-24 18:11:06,530 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 8 msec 2023-07-24 18:11:06,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-24 18:11:06,627 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-24 18:11:06,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-24 18:11:06,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-24 18:11:06,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,634 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:49642 deadline: 1690222326636, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-24 18:11:06,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:06,640 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,641 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-24 18:11:06,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-24 18:11:06,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-24 18:11:06,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-24 18:11:06,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-24 18:11:06,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-24 18:11:06,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-24 18:11:06,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-24 18:11:06,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-24 18:11:06,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-24 18:11:06,655 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-24 18:11:06,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-24 18:11:06,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-24 18:11:06,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-24 18:11:06,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-24 18:11:06,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-24 18:11:06,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,664 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:36991] to rsgroup master 2023-07-24 18:11:06,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-24 18:11:06,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:49642 deadline: 1690223466665, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 2023-07-24 18:11:06,666 WARN [Listener at localhost/45633] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:36991 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-24 18:11:06,667 INFO [Listener at localhost/45633] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-24 18:11:06,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-24 18:11:06,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-24 18:11:06,668 INFO [Listener at localhost/45633] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35379, jenkins-hbase4.apache.org:38391, jenkins-hbase4.apache.org:41307, jenkins-hbase4.apache.org:42795], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-24 18:11:06,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-24 18:11:06,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36991] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-24 18:11:06,687 INFO [Listener at localhost/45633] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572 (was 572), OpenFileDescriptor=847 (was 847), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=587 (was 587), ProcessCount=177 (was 177), AvailableMemoryMB=5156 (was 5146) - AvailableMemoryMB LEAK? 
- 2023-07-24 18:11:06,687 WARN [Listener at localhost/45633] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-24 18:11:06,687 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-24 18:11:06,687 INFO [Listener at localhost/45633] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-24 18:11:06,687 DEBUG [Listener at localhost/45633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c50ab29 to 127.0.0.1:56931 2023-07-24 18:11:06,687 DEBUG [Listener at localhost/45633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,688 DEBUG [Listener at localhost/45633] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-24 18:11:06,688 DEBUG [Listener at localhost/45633] util.JVMClusterUtil(257): Found active master hash=836616347, stopped=false 2023-07-24 18:11:06,688 DEBUG [Listener at localhost/45633] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-24 18:11:06,688 DEBUG [Listener at localhost/45633] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-24 18:11:06,688 INFO [Listener at localhost/45633] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:06,690 INFO [Listener at localhost/45633] procedure2.ProcedureExecutor(629): Stopping 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-24 18:11:06,690 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:06,690 DEBUG [Listener at localhost/45633] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a647a4e to 127.0.0.1:56931 2023-07-24 18:11:06,691 DEBUG [Listener at localhost/45633] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,691 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35379,1690222262627' ***** 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:06,691 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38391,1690222262805' ***** 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:06,691 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41307,1690222262965' ***** 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:06,691 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:06,691 INFO [Listener at localhost/45633] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42795,1690222264476' ***** 2023-07-24 18:11:06,691 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:06,692 INFO [Listener at localhost/45633] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-24 18:11:06,695 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:06,695 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:06,699 INFO [RS:0;jenkins-hbase4:35379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5e71051c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:06,699 INFO [RS:2;jenkins-hbase4:41307] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3324e9d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:06,699 INFO [RS:1;jenkins-hbase4:38391] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3c048bae{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-24 18:11:06,699 INFO [RS:3;jenkins-hbase4:42795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6d18f4b1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
2023-07-24 18:11:06,699 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-24 18:11:06,699 INFO [RS:0;jenkins-hbase4:35379] server.AbstractConnector(383): Stopped ServerConnector@5f7a00b2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,699 INFO [RS:2;jenkins-hbase4:41307] server.AbstractConnector(383): Stopped ServerConnector@444ab1ff{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,699 INFO [RS:3;jenkins-hbase4:42795] server.AbstractConnector(383): Stopped ServerConnector@78269785{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,700 INFO [RS:1;jenkins-hbase4:38391] server.AbstractConnector(383): Stopped ServerConnector@2d998fb6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,699 INFO [RS:0;jenkins-hbase4:35379] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:06,700 INFO [RS:1;jenkins-hbase4:38391] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:06,700 INFO [RS:3;jenkins-hbase4:42795] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:06,700 INFO [RS:2;jenkins-hbase4:41307] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:06,702 INFO [RS:1;jenkins-hbase4:38391] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1799bcfb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:06,702 INFO [RS:3;jenkins-hbase4:42795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@28beacb7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:06,704 INFO [RS:1;jenkins-hbase4:38391] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@802ef93{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:06,705 INFO [RS:3;jenkins-hbase4:42795] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4a3ed2e7{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:06,701 INFO [RS:0;jenkins-hbase4:35379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6344f3ff{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:06,703 INFO [RS:2;jenkins-hbase4:41307] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6499b8a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:06,706 INFO [RS:0;jenkins-hbase4:35379] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e767489{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:06,706 INFO [RS:2;jenkins-hbase4:41307] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@478dee5b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:06,707 INFO [RS:1;jenkins-hbase4:38391] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:06,707 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:06,707 INFO [RS:3;jenkins-hbase4:42795] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:06,707 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:06,707 INFO [RS:2;jenkins-hbase4:41307] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:06,707 INFO [RS:1;jenkins-hbase4:38391] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:06,707 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:06,707 INFO [RS:1;jenkins-hbase4:38391] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:06,707 INFO [RS:3;jenkins-hbase4:42795] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:06,707 INFO [RS:2;jenkins-hbase4:41307] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:06,708 INFO [RS:2;jenkins-hbase4:41307] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:06,708 INFO [RS:3;jenkins-hbase4:42795] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:06,708 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(3305): Received CLOSE for 8499050dc118b7510fe1f9c83ad81c50 2023-07-24 18:11:06,708 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:06,708 INFO [RS:0;jenkins-hbase4:35379] regionserver.HeapMemoryManager(220): Stopping 2023-07-24 18:11:06,708 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:06,708 DEBUG [RS:3;jenkins-hbase4:42795] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a4b14d6 to 127.0.0.1:56931 2023-07-24 18:11:06,708 DEBUG [RS:2;jenkins-hbase4:41307] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6a04b9d9 to 127.0.0.1:56931 2023-07-24 18:11:06,708 DEBUG [RS:2;jenkins-hbase4:41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,708 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41307,1690222262965; all regions closed. 2023-07-24 18:11:06,708 DEBUG [RS:3;jenkins-hbase4:42795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,709 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42795,1690222264476; all regions closed. 
2023-07-24 18:11:06,709 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(3305): Received CLOSE for a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:06,709 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:06,709 DEBUG [RS:1;jenkins-hbase4:38391] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x00d12096 to 127.0.0.1:56931 2023-07-24 18:11:06,709 DEBUG [RS:1;jenkins-hbase4:38391] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,710 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-24 18:11:06,710 INFO [RS:0;jenkins-hbase4:35379] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-24 18:11:06,709 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8499050dc118b7510fe1f9c83ad81c50, disabling compactions & flushes 2023-07-24 18:11:06,710 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:06,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:06,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. after waiting 0 ms 2023-07-24 18:11:06,710 INFO [RS:0;jenkins-hbase4:35379] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-24 18:11:06,710 INFO [RS:1;jenkins-hbase4:38391] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:06,711 INFO [RS:1;jenkins-hbase4:38391] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:06,711 INFO [RS:1;jenkins-hbase4:38391] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:06,710 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:06,711 DEBUG [RS:0;jenkins-hbase4:35379] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7651fb37 to 127.0.0.1:56931 2023-07-24 18:11:06,710 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:06,711 DEBUG [RS:0;jenkins-hbase4:35379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,711 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35379,1690222262627; all regions closed. 
2023-07-24 18:11:06,711 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-24 18:11:06,711 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 8499050dc118b7510fe1f9c83ad81c50 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-24 18:11:06,716 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-24 18:11:06,716 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 8499050dc118b7510fe1f9c83ad81c50=hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50., a19b2f2bc597559d6e5be813a2e02e14=hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14.} 2023-07-24 18:11:06,716 DEBUG [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1504): Waiting on 1588230740, 8499050dc118b7510fe1f9c83ad81c50, a19b2f2bc597559d6e5be813a2e02e14 2023-07-24 18:11:06,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-24 18:11:06,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-24 18:11:06,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-24 18:11:06,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-24 18:11:06,716 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-24 18:11:06,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-24 18:11:06,722 DEBUG [RS:2;jenkins-hbase4:41307] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41307%2C1690222262965:(num 1690222263662) 2023-07-24 18:11:06,724 DEBUG [RS:2;jenkins-hbase4:41307] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:06,724 INFO [RS:2;jenkins-hbase4:41307] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:06,724 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 18:11:06,727 INFO [RS:2;jenkins-hbase4:41307] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41307 2023-07-24 18:11:06,728 DEBUG [RS:0;jenkins-hbase4:35379] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs 2023-07-24 18:11:06,728 INFO [RS:0;jenkins-hbase4:35379] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35379%2C1690222262627:(num 1690222263667) 2023-07-24 18:11:06,728 DEBUG [RS:0;jenkins-hbase4:35379] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,728 INFO [RS:0;jenkins-hbase4:35379] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,728 INFO [RS:0;jenkins-hbase4:35379] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:06,728 INFO [RS:0;jenkins-hbase4:35379] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:06,728 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:06,728 INFO [RS:0;jenkins-hbase4:35379] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:06,729 INFO [RS:0;jenkins-hbase4:35379] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-24 18:11:06,730 INFO [RS:0;jenkins-hbase4:35379] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35379 2023-07-24 18:11:06,730 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,730 DEBUG [RS:3;jenkins-hbase4:42795] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs 2023-07-24 18:11:06,730 INFO [RS:3;jenkins-hbase4:42795] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42795%2C1690222264476:(num 1690222264781) 2023-07-24 18:11:06,730 DEBUG [RS:3;jenkins-hbase4:42795] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,730 INFO [RS:3;jenkins-hbase4:42795] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,731 INFO [RS:3;jenkins-hbase4:42795] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:06,731 INFO [RS:3;jenkins-hbase4:42795] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-24 18:11:06,731 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:06,731 INFO [RS:3;jenkins-hbase4:42795] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-24 18:11:06,731 INFO [RS:3;jenkins-hbase4:42795] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-24 18:11:06,732 INFO [RS:3;jenkins-hbase4:42795] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42795 2023-07-24 18:11:06,737 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,738 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,747 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/.tmp/info/b2141b8aacc4468f9d02cdcbbeba2902 2023-07-24 18:11:06,757 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,757 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/info/f1bad2ee45a54563bcf53603777358eb 2023-07-24 18:11:06,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b2141b8aacc4468f9d02cdcbbeba2902 2023-07-24 18:11:06,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/.tmp/info/b2141b8aacc4468f9d02cdcbbeba2902 as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/info/b2141b8aacc4468f9d02cdcbbeba2902 2023-07-24 18:11:06,764 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1bad2ee45a54563bcf53603777358eb 2023-07-24 18:11:06,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b2141b8aacc4468f9d02cdcbbeba2902 2023-07-24 18:11:06,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/info/b2141b8aacc4468f9d02cdcbbeba2902, entries=3, sequenceid=9, filesize=5.0 K 2023-07-24 18:11:06,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 8499050dc118b7510fe1f9c83ad81c50 in 56ms, sequenceid=9, compaction requested=false 2023-07-24 18:11:06,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/namespace/8499050dc118b7510fe1f9c83ad81c50/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-24 18:11:06,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 
2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8499050dc118b7510fe1f9c83ad81c50: 2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690222263887.8499050dc118b7510fe1f9c83ad81c50. 2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a19b2f2bc597559d6e5be813a2e02e14, disabling compactions & flushes 2023-07-24 18:11:06,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. after waiting 0 ms 2023-07-24 18:11:06,777 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:06,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a19b2f2bc597559d6e5be813a2e02e14 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-24 18:11:06,778 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/rep_barrier/1be678993e08405a9cfce8ad8d1b9e35 2023-07-24 18:11:06,784 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1be678993e08405a9cfce8ad8d1b9e35 2023-07-24 18:11:06,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/.tmp/m/6a683e07c8f24d72a51eb6bb6c4bedd6 2023-07-24 18:11:06,802 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/table/b9f6f72d1c854d25b516dd78b4865557 2023-07-24 18:11:06,805 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a683e07c8f24d72a51eb6bb6c4bedd6 2023-07-24 18:11:06,805 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/.tmp/m/6a683e07c8f24d72a51eb6bb6c4bedd6 as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/m/6a683e07c8f24d72a51eb6bb6c4bedd6 2023-07-24 18:11:06,807 INFO 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b9f6f72d1c854d25b516dd78b4865557 2023-07-24 18:11:06,808 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/info/f1bad2ee45a54563bcf53603777358eb as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/info/f1bad2ee45a54563bcf53603777358eb 2023-07-24 18:11:06,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6a683e07c8f24d72a51eb6bb6c4bedd6 2023-07-24 18:11:06,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/m/6a683e07c8f24d72a51eb6bb6c4bedd6, entries=12, sequenceid=29, filesize=5.4 K 2023-07-24 18:11:06,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for a19b2f2bc597559d6e5be813a2e02e14 in 34ms, sequenceid=29, compaction requested=false 2023-07-24 18:11:06,815 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42795,1690222264476 2023-07-24 18:11:06,816 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): 
regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:06,817 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:06,817 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35379,1690222262627] 2023-07-24 18:11:06,817 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35379,1690222262627; numProcessing=1 2023-07-24 18:11:06,818 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35379,1690222262627 2023-07-24 18:11:06,818 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41307,1690222262965 2023-07-24 18:11:06,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f1bad2ee45a54563bcf53603777358eb 2023-07-24 18:11:06,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/info/f1bad2ee45a54563bcf53603777358eb, entries=22, sequenceid=26, filesize=7.3 K 2023-07-24 18:11:06,819 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/rep_barrier/1be678993e08405a9cfce8ad8d1b9e35 as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/rep_barrier/1be678993e08405a9cfce8ad8d1b9e35 2023-07-24 18:11:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/rsgroup/a19b2f2bc597559d6e5be813a2e02e14/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-24 18:11:06,820 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35379,1690222262627 already deleted, retry=false 2023-07-24 18:11:06,820 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35379,1690222262627 expired; onlineServers=3 2023-07-24 18:11:06,820 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42795,1690222264476] 2023-07-24 18:11:06,820 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42795,1690222264476; numProcessing=2 2023-07-24 18:11:06,820 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:06,821 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 2023-07-24 18:11:06,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a19b2f2bc597559d6e5be813a2e02e14: 2023-07-24 18:11:06,821 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690222264006.a19b2f2bc597559d6e5be813a2e02e14. 
2023-07-24 18:11:06,822 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42795,1690222264476 already deleted, retry=false 2023-07-24 18:11:06,823 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42795,1690222264476 expired; onlineServers=2 2023-07-24 18:11:06,823 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41307,1690222262965] 2023-07-24 18:11:06,823 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41307,1690222262965; numProcessing=3 2023-07-24 18:11:06,824 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41307,1690222262965 already deleted, retry=false 2023-07-24 18:11:06,824 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41307,1690222262965 expired; onlineServers=1 2023-07-24 18:11:06,825 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1be678993e08405a9cfce8ad8d1b9e35 2023-07-24 18:11:06,825 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/rep_barrier/1be678993e08405a9cfce8ad8d1b9e35, entries=1, sequenceid=26, filesize=4.9 K 2023-07-24 18:11:06,826 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/.tmp/table/b9f6f72d1c854d25b516dd78b4865557 as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/table/b9f6f72d1c854d25b516dd78b4865557 2023-07-24 18:11:06,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b9f6f72d1c854d25b516dd78b4865557 2023-07-24 18:11:06,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/table/b9f6f72d1c854d25b516dd78b4865557, entries=6, sequenceid=26, filesize=5.1 K 2023-07-24 18:11:06,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 116ms, sequenceid=26, compaction requested=false 2023-07-24 18:11:06,833 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-24 18:11:06,842 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-24 18:11:06,843 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-24 18:11:06,843 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed 
hbase:meta,,1.1588230740 2023-07-24 18:11:06,843 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-24 18:11:06,843 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-24 18:11:06,916 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38391,1690222262805; all regions closed. 2023-07-24 18:11:06,921 DEBUG [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs 2023-07-24 18:11:06,921 INFO [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38391%2C1690222262805.meta:.meta(num 1690222263828) 2023-07-24 18:11:06,925 DEBUG [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/oldWALs 2023-07-24 18:11:06,925 INFO [RS:1;jenkins-hbase4:38391] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38391%2C1690222262805:(num 1690222263663) 2023-07-24 18:11:06,926 DEBUG [RS:1;jenkins-hbase4:38391] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,926 INFO [RS:1;jenkins-hbase4:38391] regionserver.LeaseManager(133): Closed leases 2023-07-24 18:11:06,926 INFO [RS:1;jenkins-hbase4:38391] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-24 18:11:06,926 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-24 18:11:06,927 INFO [RS:1;jenkins-hbase4:38391] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38391 2023-07-24 18:11:06,929 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38391,1690222262805 2023-07-24 18:11:06,929 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-24 18:11:06,930 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38391,1690222262805] 2023-07-24 18:11:06,930 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38391,1690222262805; numProcessing=4 2023-07-24 18:11:06,932 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38391,1690222262805 already deleted, retry=false 2023-07-24 18:11:06,932 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38391,1690222262805 expired; onlineServers=0 2023-07-24 18:11:06,932 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36991,1690222262434' ***** 2023-07-24 18:11:06,932 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-24 18:11:06,933 DEBUG [M:0;jenkins-hbase4:36991] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@35e24278, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-24 18:11:06,933 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-24 18:11:06,935 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-24 18:11:06,935 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-24 18:11:06,935 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-24 18:11:06,936 INFO [M:0;jenkins-hbase4:36991] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54eb7694{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-24 18:11:06,936 INFO [M:0;jenkins-hbase4:36991] server.AbstractConnector(383): Stopped ServerConnector@399a6c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,936 INFO [M:0;jenkins-hbase4:36991] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-24 18:11:06,937 INFO [M:0;jenkins-hbase4:36991] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@6994f8ba{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-24 18:11:06,937 INFO [M:0;jenkins-hbase4:36991] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@147da2bf{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/hadoop.log.dir/,STOPPED} 2023-07-24 18:11:06,937 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36991,1690222262434 2023-07-24 18:11:06,937 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36991,1690222262434; all regions closed. 2023-07-24 18:11:06,938 DEBUG [M:0;jenkins-hbase4:36991] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-24 18:11:06,938 INFO [M:0;jenkins-hbase4:36991] master.HMaster(1491): Stopping master jetty server 2023-07-24 18:11:06,938 INFO [M:0;jenkins-hbase4:36991] server.AbstractConnector(383): Stopped ServerConnector@891e88{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-24 18:11:06,939 DEBUG [M:0;jenkins-hbase4:36991] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-24 18:11:06,939 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-24 18:11:06,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222263406] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690222263406,5,FailOnTimeoutGroup] 2023-07-24 18:11:06,939 DEBUG [M:0;jenkins-hbase4:36991] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-24 18:11:06,939 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222263406] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690222263406,5,FailOnTimeoutGroup] 2023-07-24 18:11:06,939 INFO [M:0;jenkins-hbase4:36991] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-24 18:11:06,939 INFO [M:0;jenkins-hbase4:36991] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-24 18:11:06,939 INFO [M:0;jenkins-hbase4:36991] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-24 18:11:06,939 DEBUG [M:0;jenkins-hbase4:36991] master.HMaster(1512): Stopping service threads 2023-07-24 18:11:06,939 INFO [M:0;jenkins-hbase4:36991] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-24 18:11:06,939 ERROR [M:0;jenkins-hbase4:36991] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-24 18:11:06,939 INFO [M:0;jenkins-hbase4:36991] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-24 18:11:06,939 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-24 18:11:06,940 DEBUG [M:0;jenkins-hbase4:36991] zookeeper.ZKUtil(398): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-24 18:11:06,940 WARN [M:0;jenkins-hbase4:36991] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-24 18:11:06,940 INFO [M:0;jenkins-hbase4:36991] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-24 18:11:06,940 INFO [M:0;jenkins-hbase4:36991] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-24 18:11:06,940 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-24 18:11:06,940 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:06,940 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:06,940 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-24 18:11:06,940 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-24 18:11:06,940 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB 2023-07-24 18:11:06,951 INFO [M:0;jenkins-hbase4:36991] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8ea6d4ff839245eda1cb09e6e3d12087 2023-07-24 18:11:06,957 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8ea6d4ff839245eda1cb09e6e3d12087 as hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8ea6d4ff839245eda1cb09e6e3d12087 2023-07-24 18:11:06,961 INFO [M:0;jenkins-hbase4:36991] regionserver.HStore(1080): Added hdfs://localhost:33823/user/jenkins/test-data/ef75b281-fbde-9129-8093-641cbb92470c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8ea6d4ff839245eda1cb09e6e3d12087, entries=22, sequenceid=175, filesize=11.1 K 2023-07-24 18:11:06,962 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78044, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 22ms, sequenceid=175, compaction requested=false 2023-07-24 18:11:06,964 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-24 18:11:06,964 DEBUG [M:0;jenkins-hbase4:36991] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-24 18:11:06,967 INFO [M:0;jenkins-hbase4:36991] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-24 18:11:06,967 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-24 18:11:06,968 INFO [M:0;jenkins-hbase4:36991] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36991 2023-07-24 18:11:06,969 DEBUG [M:0;jenkins-hbase4:36991] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,36991,1690222262434 already deleted, retry=false 2023-07-24 18:11:07,190 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,190 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): master:36991-0x10198877b860000, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,190 INFO [M:0;jenkins-hbase4:36991] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36991,1690222262434; zookeeper connection closed. 2023-07-24 18:11:07,290 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,290 INFO [RS:1;jenkins-hbase4:38391] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38391,1690222262805; zookeeper connection closed. 2023-07-24 18:11:07,291 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:38391-0x10198877b860002, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,291 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@215274d9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@215274d9 2023-07-24 18:11:07,391 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,391 INFO [RS:2;jenkins-hbase4:41307] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41307,1690222262965; zookeeper connection closed. 2023-07-24 18:11:07,391 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:41307-0x10198877b860003, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,391 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5fc7fa96] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5fc7fa96 2023-07-24 18:11:07,491 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-24 18:11:07,491 INFO [RS:3;jenkins-hbase4:42795] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42795,1690222264476; zookeeper connection closed. 
2023-07-24 18:11:07,491 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:42795-0x10198877b86000b, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:07,491 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@413f31ec] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@413f31ec
2023-07-24 18:11:07,591 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:07,591 INFO [RS:0;jenkins-hbase4:35379] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35379,1690222262627; zookeeper connection closed.
2023-07-24 18:11:07,591 DEBUG [Listener at localhost/45633-EventThread] zookeeper.ZKWatcher(600): regionserver:35379-0x10198877b860001, quorum=127.0.0.1:56931, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null
2023-07-24 18:11:07,591 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@748635b8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@748635b8
2023-07-24 18:11:07,592 INFO [Listener at localhost/45633] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete
2023-07-24 18:11:07,592 WARN [Listener at localhost/45633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:07,595 INFO [Listener at localhost/45633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:07,699 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:07,699 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1022496820-172.31.14.131-1690222261555 (Datanode Uuid 387f921a-20de-4d09-bc46-0213032753c9) service to localhost/127.0.0.1:33823
2023-07-24 18:11:07,699 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data5/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,700 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data6/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,701 WARN [Listener at localhost/45633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:07,703 INFO [Listener at localhost/45633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:07,806 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:07,806 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1022496820-172.31.14.131-1690222261555 (Datanode Uuid b4f805ea-43ae-4482-b78d-3158d061c2ed) service to localhost/127.0.0.1:33823
2023-07-24 18:11:07,807 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data3/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,807 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data4/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,809 WARN [Listener at localhost/45633] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2023-07-24 18:11:07,811 INFO [Listener at localhost/45633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:07,914 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2023-07-24 18:11:07,914 WARN [BP-1022496820-172.31.14.131-1690222261555 heartbeating to localhost/127.0.0.1:33823] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1022496820-172.31.14.131-1690222261555 (Datanode Uuid efd52962-d7a6-4f4d-9e37-fe140ee138f2) service to localhost/127.0.0.1:33823
2023-07-24 18:11:07,914 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data1/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,915 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8e170ea7-3918-1780-2325-f416f12ebf76/cluster_605c4ad8-c2b6-16d5-ac16-27ce43001afc/dfs/data/data2/current/BP-1022496820-172.31.14.131-1690222261555] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2023-07-24 18:11:07,925 INFO [Listener at localhost/45633] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2023-07-24 18:11:08,039 INFO [Listener at localhost/45633] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers
2023-07-24 18:11:08,064 INFO [Listener at localhost/45633] hbase.HBaseTestingUtility(1293): Minicluster is down
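Note (for readers of this log): the final "Minicluster is down" entry is emitted by HBaseTestingUtility when a test tears down its in-process HDFS/ZooKeeper/HBase cluster, which is the shutdown sequence recorded above. A minimal sketch of the usual JUnit lifecycle that produces this kind of startup and shutdown logging is given below; the class name and test body are illustrative placeholders, not taken from this log.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    // Sketch of the mini-cluster lifecycle; not the actual test that produced this log.
    public class MiniClusterLifecycleSketch {

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Starts an in-process HDFS, ZooKeeper, and HBase cluster (3 region servers here).
        TEST_UTIL.startMiniCluster(3);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        // Stops the region servers and master, then the DataNodes and the MiniZK cluster,
        // ending with the "Minicluster is down" log line seen above.
        TEST_UTIL.shutdownMiniCluster();
      }

      @Test
      public void placeholder() throws Exception {
        // Test logic against TEST_UTIL.getAdmin() / TEST_UTIL.getConnection() would go here.
      }
    }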